00:00:00.000 Started by upstream project "autotest-per-patch" build number 132556 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.116 The recommended git tool is: git 00:00:00.117 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.159 Fetching changes from the remote Git repository 00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.631 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.644 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.656 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.656 > git config core.sparsecheckout # timeout=10 00:00:05.668 > git read-tree -mu HEAD # timeout=10 00:00:05.684 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.710 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.710 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.818 [Pipeline] Start of Pipeline 00:00:05.833 [Pipeline] library 00:00:05.835 Loading library shm_lib@master 00:00:05.835 Library shm_lib@master is cached. Copying from home. 00:00:05.855 [Pipeline] node 00:00:05.863 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.867 [Pipeline] { 00:00:05.877 [Pipeline] catchError 00:00:05.879 [Pipeline] { 00:00:05.893 [Pipeline] wrap 00:00:05.903 [Pipeline] { 00:00:05.913 [Pipeline] stage 00:00:05.915 [Pipeline] { (Prologue) 00:00:05.932 [Pipeline] echo 00:00:05.934 Node: VM-host-SM17 00:00:05.939 [Pipeline] cleanWs 00:00:05.948 [WS-CLEANUP] Deleting project workspace... 00:00:05.948 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.954 [WS-CLEANUP] done 00:00:06.203 [Pipeline] setCustomBuildProperty 00:00:06.323 [Pipeline] httpRequest 00:00:06.668 [Pipeline] echo 00:00:06.669 Sorcerer 10.211.164.20 is alive 00:00:06.678 [Pipeline] retry 00:00:06.679 [Pipeline] { 00:00:06.691 [Pipeline] httpRequest 00:00:06.696 HttpMethod: GET 00:00:06.696 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.697 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.708 Response Code: HTTP/1.1 200 OK 00:00:06.709 Success: Status code 200 is in the accepted range: 200,404 00:00:06.709 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.852 [Pipeline] } 00:00:11.869 [Pipeline] // retry 00:00:11.876 [Pipeline] sh 00:00:12.155 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.168 [Pipeline] httpRequest 00:00:12.541 [Pipeline] echo 00:00:12.542 Sorcerer 10.211.164.20 is alive 00:00:12.552 [Pipeline] retry 00:00:12.554 [Pipeline] { 00:00:12.569 [Pipeline] httpRequest 00:00:12.573 HttpMethod: GET 00:00:12.574 URL: http://10.211.164.20/packages/spdk_5ca6db5da678c45c31ba80e10cce316a7c76e479.tar.gz 00:00:12.574 Sending request to url: http://10.211.164.20/packages/spdk_5ca6db5da678c45c31ba80e10cce316a7c76e479.tar.gz 00:00:12.576 Response Code: HTTP/1.1 200 OK 00:00:12.577 Success: Status code 200 is in the accepted range: 200,404 00:00:12.577 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_5ca6db5da678c45c31ba80e10cce316a7c76e479.tar.gz 00:00:29.788 [Pipeline] } 00:00:29.803 [Pipeline] // retry 00:00:29.811 [Pipeline] sh 00:00:30.116 + tar --no-same-owner -xf spdk_5ca6db5da678c45c31ba80e10cce316a7c76e479.tar.gz 00:00:33.418 [Pipeline] sh 00:00:33.698 + git -C spdk log --oneline -n5 00:00:33.698 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:00:33.698 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:00:33.698 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:00:33.698 e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled 00:00:33.698 27c6508ea bdev: Add spdk_bdev_io_hide_metadata() for bdev modules 00:00:33.716 [Pipeline] writeFile 00:00:33.731 [Pipeline] sh 00:00:34.010 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:34.020 [Pipeline] sh 00:00:34.295 + cat autorun-spdk.conf 00:00:34.295 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.295 SPDK_TEST_NVMF=1 00:00:34.295 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.295 SPDK_TEST_URING=1 00:00:34.295 SPDK_TEST_USDT=1 00:00:34.295 SPDK_RUN_UBSAN=1 00:00:34.295 NET_TYPE=virt 00:00:34.295 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:34.302 RUN_NIGHTLY=0 00:00:34.305 [Pipeline] } 00:00:34.319 [Pipeline] // stage 00:00:34.333 [Pipeline] stage 00:00:34.336 [Pipeline] { (Run VM) 00:00:34.348 [Pipeline] sh 00:00:34.629 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:34.629 + echo 'Start stage prepare_nvme.sh' 00:00:34.629 Start stage prepare_nvme.sh 00:00:34.629 + [[ -n 0 ]] 00:00:34.629 + disk_prefix=ex0 00:00:34.629 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:34.629 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:34.629 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:34.629 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.629 ++ SPDK_TEST_NVMF=1 00:00:34.629 
++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.629 ++ SPDK_TEST_URING=1 00:00:34.629 ++ SPDK_TEST_USDT=1 00:00:34.629 ++ SPDK_RUN_UBSAN=1 00:00:34.629 ++ NET_TYPE=virt 00:00:34.629 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:34.629 ++ RUN_NIGHTLY=0 00:00:34.629 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:34.629 + nvme_files=() 00:00:34.629 + declare -A nvme_files 00:00:34.629 + backend_dir=/var/lib/libvirt/images/backends 00:00:34.629 + nvme_files['nvme.img']=5G 00:00:34.629 + nvme_files['nvme-cmb.img']=5G 00:00:34.629 + nvme_files['nvme-multi0.img']=4G 00:00:34.629 + nvme_files['nvme-multi1.img']=4G 00:00:34.629 + nvme_files['nvme-multi2.img']=4G 00:00:34.629 + nvme_files['nvme-openstack.img']=8G 00:00:34.629 + nvme_files['nvme-zns.img']=5G 00:00:34.629 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:34.629 + (( SPDK_TEST_FTL == 1 )) 00:00:34.630 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:34.630 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:34.630 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:34.630 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:34.630 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:34.630 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:34.630 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:34.630 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:34.630 + for nvme in "${!nvme_files[@]}" 00:00:34.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:35.567 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.567 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:35.567 + echo 'End stage prepare_nvme.sh' 00:00:35.567 End stage prepare_nvme.sh 00:00:35.578 [Pipeline] sh 00:00:35.893 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:35.893 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b 
/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:00:35.893 00:00:35.893 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:35.893 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:35.893 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:35.893 HELP=0 00:00:35.893 DRY_RUN=0 00:00:35.893 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:00:35.893 NVME_DISKS_TYPE=nvme,nvme, 00:00:35.893 NVME_AUTO_CREATE=0 00:00:35.893 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:00:35.893 NVME_CMB=,, 00:00:35.893 NVME_PMR=,, 00:00:35.893 NVME_ZNS=,, 00:00:35.893 NVME_MS=,, 00:00:35.893 NVME_FDP=,, 00:00:35.893 SPDK_VAGRANT_DISTRO=fedora39 00:00:35.893 SPDK_VAGRANT_VMCPU=10 00:00:35.893 SPDK_VAGRANT_VMRAM=12288 00:00:35.893 SPDK_VAGRANT_PROVIDER=libvirt 00:00:35.893 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:35.893 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:35.893 SPDK_OPENSTACK_NETWORK=0 00:00:35.893 VAGRANT_PACKAGE_BOX=0 00:00:35.893 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:35.893 FORCE_DISTRO=true 00:00:35.893 VAGRANT_BOX_VERSION= 00:00:35.893 EXTRA_VAGRANTFILES= 00:00:35.893 NIC_MODEL=e1000 00:00:35.893 00:00:35.893 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:35.893 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:39.181 Bringing machine 'default' up with 'libvirt' provider... 00:00:39.748 ==> default: Creating image (snapshot of base box volume). 00:00:40.007 ==> default: Creating domain with the following settings... 
00:00:40.007 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732652499_bde7ba72dd460ff167a3 00:00:40.007 ==> default: -- Domain type: kvm 00:00:40.007 ==> default: -- Cpus: 10 00:00:40.007 ==> default: -- Feature: acpi 00:00:40.007 ==> default: -- Feature: apic 00:00:40.007 ==> default: -- Feature: pae 00:00:40.007 ==> default: -- Memory: 12288M 00:00:40.007 ==> default: -- Memory Backing: hugepages: 00:00:40.007 ==> default: -- Management MAC: 00:00:40.007 ==> default: -- Loader: 00:00:40.007 ==> default: -- Nvram: 00:00:40.007 ==> default: -- Base box: spdk/fedora39 00:00:40.007 ==> default: -- Storage pool: default 00:00:40.007 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732652499_bde7ba72dd460ff167a3.img (20G) 00:00:40.007 ==> default: -- Volume Cache: default 00:00:40.007 ==> default: -- Kernel: 00:00:40.007 ==> default: -- Initrd: 00:00:40.007 ==> default: -- Graphics Type: vnc 00:00:40.007 ==> default: -- Graphics Port: -1 00:00:40.007 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.007 ==> default: -- Graphics Password: Not defined 00:00:40.007 ==> default: -- Video Type: cirrus 00:00:40.007 ==> default: -- Video VRAM: 9216 00:00:40.007 ==> default: -- Sound Type: 00:00:40.007 ==> default: -- Keymap: en-us 00:00:40.007 ==> default: -- TPM Path: 00:00:40.007 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.007 ==> default: -- Command line args: 00:00:40.007 ==> default: -> value=-device, 00:00:40.007 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:40.007 ==> default: -> value=-drive, 00:00:40.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:40.008 ==> default: -> value=-device, 00:00:40.008 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.008 ==> default: -> value=-device, 00:00:40.008 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:40.008 ==> default: -> value=-drive, 00:00:40.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:40.008 ==> default: -> value=-device, 00:00:40.008 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.008 ==> default: -> value=-drive, 00:00:40.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:40.008 ==> default: -> value=-device, 00:00:40.008 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.008 ==> default: -> value=-drive, 00:00:40.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:40.008 ==> default: -> value=-device, 00:00:40.008 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.008 ==> default: Creating shared folders metadata... 00:00:40.008 ==> default: Starting domain. 00:00:41.386 ==> default: Waiting for domain to get an IP address... 00:00:59.474 ==> default: Waiting for SSH to become available... 00:00:59.474 ==> default: Configuring and enabling network interfaces... 
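(Aside: the "-device"/"-drive" values printed above are the NVMe arguments that vagrant-libvirt hands to QEMU for this domain. A minimal sketch of the equivalent standalone invocation follows; the argument values are copied verbatim from the log, while the binary path — taken from the SPDK_QEMU_EMULATOR setting earlier — and the flattened single-command form are assumptions for illustration only.)

  # sketch only: equivalent NVMe device wiring implied by the domain args above
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096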
00:01:02.006 default: SSH address: 192.168.121.5:22 00:01:02.006 default: SSH username: vagrant 00:01:02.006 default: SSH auth method: private key 00:01:04.550 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:12.663 ==> default: Mounting SSHFS shared folder... 00:01:13.230 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:13.230 ==> default: Checking Mount.. 00:01:14.608 ==> default: Folder Successfully Mounted! 00:01:14.608 ==> default: Running provisioner: file... 00:01:15.218 default: ~/.gitconfig => .gitconfig 00:01:15.787 00:01:15.787 SUCCESS! 00:01:15.787 00:01:15.787 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:15.787 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:15.787 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:15.787 00:01:15.796 [Pipeline] } 00:01:15.812 [Pipeline] // stage 00:01:15.825 [Pipeline] dir 00:01:15.826 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:15.828 [Pipeline] { 00:01:15.844 [Pipeline] catchError 00:01:15.846 [Pipeline] { 00:01:15.859 [Pipeline] sh 00:01:16.141 + vagrant ssh-config --host vagrant 00:01:16.141 + sed -ne /^Host/,$p 00:01:16.141 + tee ssh_conf 00:01:20.331 Host vagrant 00:01:20.331 HostName 192.168.121.5 00:01:20.331 User vagrant 00:01:20.331 Port 22 00:01:20.331 UserKnownHostsFile /dev/null 00:01:20.331 StrictHostKeyChecking no 00:01:20.331 PasswordAuthentication no 00:01:20.331 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:20.331 IdentitiesOnly yes 00:01:20.331 LogLevel FATAL 00:01:20.331 ForwardAgent yes 00:01:20.331 ForwardX11 yes 00:01:20.331 00:01:20.345 [Pipeline] withEnv 00:01:20.347 [Pipeline] { 00:01:20.363 [Pipeline] sh 00:01:20.729 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:20.730 source /etc/os-release 00:01:20.730 [[ -e /image.version ]] && img=$(< /image.version) 00:01:20.730 # Minimal, systemd-like check. 00:01:20.730 if [[ -e /.dockerenv ]]; then 00:01:20.730 # Clear garbage from the node's name: 00:01:20.730 # agt-er_autotest_547-896 -> autotest_547-896 00:01:20.730 # $HOSTNAME is the actual container id 00:01:20.730 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:20.730 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:20.730 # We can assume this is a mount from a host where container is running, 00:01:20.730 # so fetch its hostname to easily identify the target swarm worker. 
00:01:20.730 container="$(< /etc/hostname) ($agent)" 00:01:20.730 else 00:01:20.730 # Fallback 00:01:20.730 container=$agent 00:01:20.730 fi 00:01:20.730 fi 00:01:20.730 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:20.730 00:01:20.740 [Pipeline] } 00:01:20.757 [Pipeline] // withEnv 00:01:20.766 [Pipeline] setCustomBuildProperty 00:01:20.785 [Pipeline] stage 00:01:20.787 [Pipeline] { (Tests) 00:01:20.808 [Pipeline] sh 00:01:21.093 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:21.366 [Pipeline] sh 00:01:21.647 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:21.664 [Pipeline] timeout 00:01:21.665 Timeout set to expire in 1 hr 0 min 00:01:21.667 [Pipeline] { 00:01:21.684 [Pipeline] sh 00:01:21.967 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:22.535 HEAD is now at 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:22.547 [Pipeline] sh 00:01:22.829 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:23.103 [Pipeline] sh 00:01:23.384 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:23.403 [Pipeline] sh 00:01:23.687 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:23.946 ++ readlink -f spdk_repo 00:01:23.946 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:23.946 + [[ -n /home/vagrant/spdk_repo ]] 00:01:23.946 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:23.946 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:23.946 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:23.946 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:23.946 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:23.946 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:23.946 + cd /home/vagrant/spdk_repo 00:01:23.946 + source /etc/os-release 00:01:23.946 ++ NAME='Fedora Linux' 00:01:23.946 ++ VERSION='39 (Cloud Edition)' 00:01:23.946 ++ ID=fedora 00:01:23.946 ++ VERSION_ID=39 00:01:23.946 ++ VERSION_CODENAME= 00:01:23.946 ++ PLATFORM_ID=platform:f39 00:01:23.946 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:23.946 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.946 ++ LOGO=fedora-logo-icon 00:01:23.946 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:23.946 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.946 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:23.946 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.946 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.946 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.946 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:23.946 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.946 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:23.946 ++ SUPPORT_END=2024-11-12 00:01:23.946 ++ VARIANT='Cloud Edition' 00:01:23.946 ++ VARIANT_ID=cloud 00:01:23.946 + uname -a 00:01:23.946 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:23.946 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:24.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:24.204 Hugepages 00:01:24.204 node hugesize free / total 00:01:24.204 node0 1048576kB 0 / 0 00:01:24.204 node0 2048kB 0 / 0 00:01:24.204 00:01:24.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.204 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:24.463 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:24.463 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:24.463 + rm -f /tmp/spdk-ld-path 00:01:24.463 + source autorun-spdk.conf 00:01:24.463 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.463 ++ SPDK_TEST_NVMF=1 00:01:24.463 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.463 ++ SPDK_TEST_URING=1 00:01:24.463 ++ SPDK_TEST_USDT=1 00:01:24.463 ++ SPDK_RUN_UBSAN=1 00:01:24.463 ++ NET_TYPE=virt 00:01:24.463 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.463 ++ RUN_NIGHTLY=0 00:01:24.463 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.463 + [[ -n '' ]] 00:01:24.463 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:24.463 + for M in /var/spdk/build-*-manifest.txt 00:01:24.463 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:24.463 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.463 + for M in /var/spdk/build-*-manifest.txt 00:01:24.463 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.463 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.463 + for M in /var/spdk/build-*-manifest.txt 00:01:24.463 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.463 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.463 ++ uname 00:01:24.463 + [[ Linux == \L\i\n\u\x ]] 00:01:24.463 + sudo dmesg -T 00:01:24.463 + sudo dmesg --clear 00:01:24.463 + dmesg_pid=5198 00:01:24.463 + sudo dmesg -Tw 00:01:24.463 + [[ Fedora Linux == FreeBSD ]] 00:01:24.463 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.463 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.463 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.463 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.463 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.463 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.463 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.463 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.463 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.463 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.463 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.463 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.463 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.463 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.463 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.463 20:22:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.463 20:22:24 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.463 20:22:24 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:24.463 20:22:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.463 20:22:24 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.722 20:22:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.722 20:22:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:24.722 20:22:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.722 20:22:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.722 20:22:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.722 20:22:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.723 20:22:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.723 20:22:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.723 20:22:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.723 20:22:24 -- paths/export.sh@5 -- $ export PATH 00:01:24.723 20:22:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.723 20:22:24 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:24.723 20:22:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.723 20:22:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732652544.XXXXXX 00:01:24.723 20:22:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732652544.WZqUTz 00:01:24.723 20:22:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.723 20:22:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.723 20:22:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:24.723 20:22:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:24.723 20:22:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.723 20:22:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.723 20:22:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.723 20:22:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.723 20:22:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:24.723 20:22:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.723 20:22:24 -- pm/common@17 -- $ local monitor 00:01:24.723 20:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.723 20:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.723 20:22:24 -- pm/common@25 -- $ sleep 1 00:01:24.723 20:22:24 -- pm/common@21 -- $ date +%s 00:01:24.723 20:22:24 -- pm/common@21 -- $ date +%s 00:01:24.723 20:22:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652544 00:01:24.723 20:22:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652544 00:01:24.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652544_collect-cpu-load.pm.log 00:01:24.723 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652544_collect-vmstat.pm.log 00:01:25.661 20:22:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:25.661 20:22:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.661 20:22:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.661 20:22:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:25.661 20:22:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.661 Tue Nov 26 08:22:25 PM UTC 2024 00:01:25.661 20:22:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.661 v25.01-pre-269-g5ca6db5da 00:01:25.661 20:22:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.661 20:22:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.661 20:22:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.661 20:22:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.661 20:22:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.661 20:22:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.661 ************************************ 00:01:25.661 START TEST ubsan 00:01:25.661 ************************************ 00:01:25.661 using ubsan 00:01:25.661 20:22:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:25.661 00:01:25.661 real 0m0.000s 00:01:25.661 user 0m0.000s 00:01:25.661 sys 0m0.000s 00:01:25.661 20:22:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.661 20:22:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.661 ************************************ 00:01:25.661 END TEST ubsan 00:01:25.661 ************************************ 00:01:25.661 20:22:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.661 20:22:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.661 20:22:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.661 20:22:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.661 20:22:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.661 20:22:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.661 20:22:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.661 20:22:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.661 20:22:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:25.920 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:25.920 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:26.180 Using 'verbs' RDMA provider 00:01:42.013 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:54.276 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:54.276 Creating mk/config.mk...done. 00:01:54.276 Creating mk/cc.flags.mk...done. 00:01:54.276 Type 'make' to build. 
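(Aside: the configure step above can be reproduced by hand outside the CI harness. A minimal sketch, assuming a local checkout at the same path and reusing the config_params string autobuild.sh printed in the log; the -j10 value matches the make stage that follows.)

  # sketch only: same flags as config_params above; the checkout path is an assumption
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10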
00:01:54.276 20:22:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:54.276 20:22:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:54.277 20:22:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:54.277 20:22:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.277 ************************************ 00:01:54.277 START TEST make 00:01:54.277 ************************************ 00:01:54.277 20:22:53 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:54.277 make[1]: Nothing to be done for 'all'. 00:02:06.480 The Meson build system 00:02:06.480 Version: 1.5.0 00:02:06.480 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:06.480 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:06.480 Build type: native build 00:02:06.480 Program cat found: YES (/usr/bin/cat) 00:02:06.480 Project name: DPDK 00:02:06.480 Project version: 24.03.0 00:02:06.480 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.480 C linker for the host machine: cc ld.bfd 2.40-14 00:02:06.480 Host machine cpu family: x86_64 00:02:06.480 Host machine cpu: x86_64 00:02:06.480 Message: ## Building in Developer Mode ## 00:02:06.480 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.480 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.480 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.480 Program python3 found: YES (/usr/bin/python3) 00:02:06.480 Program cat found: YES (/usr/bin/cat) 00:02:06.480 Compiler for C supports arguments -march=native: YES 00:02:06.480 Checking for size of "void *" : 8 00:02:06.480 Checking for size of "void *" : 8 (cached) 00:02:06.480 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:06.480 Library m found: YES 00:02:06.480 Library numa found: YES 00:02:06.480 Has header "numaif.h" : YES 00:02:06.480 Library fdt found: NO 00:02:06.480 Library execinfo found: NO 00:02:06.480 Has header "execinfo.h" : YES 00:02:06.480 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.480 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.480 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.481 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.481 Run-time dependency openssl found: YES 3.1.1 00:02:06.481 Run-time dependency libpcap found: YES 1.10.4 00:02:06.481 Has header "pcap.h" with dependency libpcap: YES 00:02:06.481 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.481 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.481 Compiler for C supports arguments -Wformat: YES 00:02:06.481 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.481 Compiler for C supports arguments -Wformat-security: NO 00:02:06.481 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.481 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.481 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.481 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.481 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.481 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.481 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.481 Compiler for C supports arguments -Wundef: YES 00:02:06.481 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.481 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:06.481 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.481 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.481 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.481 Program objdump found: YES (/usr/bin/objdump) 00:02:06.481 Compiler for C supports arguments -mavx512f: YES 00:02:06.481 Checking if "AVX512 checking" compiles: YES 00:02:06.481 Fetching value of define "__SSE4_2__" : 1 00:02:06.481 Fetching value of define "__AES__" : 1 00:02:06.481 Fetching value of define "__AVX__" : 1 00:02:06.481 Fetching value of define "__AVX2__" : 1 00:02:06.481 Fetching value of define "__AVX512BW__" : (undefined) 00:02:06.481 Fetching value of define "__AVX512CD__" : (undefined) 00:02:06.481 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:06.481 Fetching value of define "__AVX512F__" : (undefined) 00:02:06.481 Fetching value of define "__AVX512VL__" : (undefined) 00:02:06.481 Fetching value of define "__PCLMUL__" : 1 00:02:06.481 Fetching value of define "__RDRND__" : 1 00:02:06.481 Fetching value of define "__RDSEED__" : 1 00:02:06.481 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:06.481 Fetching value of define "__znver1__" : (undefined) 00:02:06.481 Fetching value of define "__znver2__" : (undefined) 00:02:06.481 Fetching value of define "__znver3__" : (undefined) 00:02:06.481 Fetching value of define "__znver4__" : (undefined) 00:02:06.481 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.481 Message: lib/log: Defining dependency "log" 00:02:06.481 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.481 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.481 Checking for function "getentropy" : NO 00:02:06.481 Message: lib/eal: Defining dependency "eal" 00:02:06.481 Message: lib/ring: Defining dependency "ring" 00:02:06.481 Message: lib/rcu: Defining dependency "rcu" 00:02:06.481 Message: lib/mempool: Defining dependency "mempool" 00:02:06.481 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.481 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.481 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:06.481 Compiler for C supports arguments -mpclmul: YES 00:02:06.481 Compiler for C supports arguments -maes: YES 00:02:06.481 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.481 Compiler for C supports arguments -mavx512bw: YES 00:02:06.481 Compiler for C supports arguments -mavx512dq: YES 00:02:06.481 Compiler for C supports arguments -mavx512vl: YES 00:02:06.481 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.481 Compiler for C supports arguments -mavx2: YES 00:02:06.481 Compiler for C supports arguments -mavx: YES 00:02:06.481 Message: lib/net: Defining dependency "net" 00:02:06.481 Message: lib/meter: Defining dependency "meter" 00:02:06.481 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.481 Message: lib/pci: Defining dependency "pci" 00:02:06.481 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.481 Message: lib/hash: Defining dependency "hash" 00:02:06.481 Message: lib/timer: Defining dependency "timer" 00:02:06.481 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.481 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.481 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.481 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.481 Message: lib/power: Defining 
dependency "power" 00:02:06.481 Message: lib/reorder: Defining dependency "reorder" 00:02:06.481 Message: lib/security: Defining dependency "security" 00:02:06.481 Has header "linux/userfaultfd.h" : YES 00:02:06.481 Has header "linux/vduse.h" : YES 00:02:06.481 Message: lib/vhost: Defining dependency "vhost" 00:02:06.481 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.481 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.481 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.481 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.481 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.481 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.481 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.481 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.481 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.481 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.481 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:06.481 Configuring doxy-api-html.conf using configuration 00:02:06.481 Configuring doxy-api-man.conf using configuration 00:02:06.481 Program mandb found: YES (/usr/bin/mandb) 00:02:06.481 Program sphinx-build found: NO 00:02:06.481 Configuring rte_build_config.h using configuration 00:02:06.481 Message: 00:02:06.481 ================= 00:02:06.481 Applications Enabled 00:02:06.481 ================= 00:02:06.481 00:02:06.481 apps: 00:02:06.481 00:02:06.481 00:02:06.481 Message: 00:02:06.481 ================= 00:02:06.481 Libraries Enabled 00:02:06.481 ================= 00:02:06.481 00:02:06.481 libs: 00:02:06.481 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:06.481 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:06.481 cryptodev, dmadev, power, reorder, security, vhost, 00:02:06.481 00:02:06.481 Message: 00:02:06.481 =============== 00:02:06.481 Drivers Enabled 00:02:06.481 =============== 00:02:06.481 00:02:06.481 common: 00:02:06.481 00:02:06.481 bus: 00:02:06.481 pci, vdev, 00:02:06.481 mempool: 00:02:06.481 ring, 00:02:06.481 dma: 00:02:06.481 00:02:06.481 net: 00:02:06.481 00:02:06.481 crypto: 00:02:06.481 00:02:06.481 compress: 00:02:06.481 00:02:06.481 vdpa: 00:02:06.481 00:02:06.481 00:02:06.481 Message: 00:02:06.481 ================= 00:02:06.481 Content Skipped 00:02:06.481 ================= 00:02:06.481 00:02:06.481 apps: 00:02:06.481 dumpcap: explicitly disabled via build config 00:02:06.481 graph: explicitly disabled via build config 00:02:06.481 pdump: explicitly disabled via build config 00:02:06.481 proc-info: explicitly disabled via build config 00:02:06.481 test-acl: explicitly disabled via build config 00:02:06.481 test-bbdev: explicitly disabled via build config 00:02:06.481 test-cmdline: explicitly disabled via build config 00:02:06.481 test-compress-perf: explicitly disabled via build config 00:02:06.481 test-crypto-perf: explicitly disabled via build config 00:02:06.481 test-dma-perf: explicitly disabled via build config 00:02:06.481 test-eventdev: explicitly disabled via build config 00:02:06.481 test-fib: explicitly disabled via build config 00:02:06.481 test-flow-perf: explicitly disabled via build config 00:02:06.481 test-gpudev: explicitly disabled via build config 00:02:06.481 test-mldev: explicitly disabled via build config 00:02:06.481 test-pipeline: 
explicitly disabled via build config 00:02:06.481 test-pmd: explicitly disabled via build config 00:02:06.481 test-regex: explicitly disabled via build config 00:02:06.481 test-sad: explicitly disabled via build config 00:02:06.481 test-security-perf: explicitly disabled via build config 00:02:06.481 00:02:06.481 libs: 00:02:06.481 argparse: explicitly disabled via build config 00:02:06.481 metrics: explicitly disabled via build config 00:02:06.481 acl: explicitly disabled via build config 00:02:06.481 bbdev: explicitly disabled via build config 00:02:06.481 bitratestats: explicitly disabled via build config 00:02:06.481 bpf: explicitly disabled via build config 00:02:06.481 cfgfile: explicitly disabled via build config 00:02:06.481 distributor: explicitly disabled via build config 00:02:06.481 efd: explicitly disabled via build config 00:02:06.481 eventdev: explicitly disabled via build config 00:02:06.481 dispatcher: explicitly disabled via build config 00:02:06.481 gpudev: explicitly disabled via build config 00:02:06.481 gro: explicitly disabled via build config 00:02:06.481 gso: explicitly disabled via build config 00:02:06.481 ip_frag: explicitly disabled via build config 00:02:06.481 jobstats: explicitly disabled via build config 00:02:06.481 latencystats: explicitly disabled via build config 00:02:06.481 lpm: explicitly disabled via build config 00:02:06.481 member: explicitly disabled via build config 00:02:06.481 pcapng: explicitly disabled via build config 00:02:06.481 rawdev: explicitly disabled via build config 00:02:06.481 regexdev: explicitly disabled via build config 00:02:06.481 mldev: explicitly disabled via build config 00:02:06.481 rib: explicitly disabled via build config 00:02:06.481 sched: explicitly disabled via build config 00:02:06.481 stack: explicitly disabled via build config 00:02:06.481 ipsec: explicitly disabled via build config 00:02:06.481 pdcp: explicitly disabled via build config 00:02:06.481 fib: explicitly disabled via build config 00:02:06.481 port: explicitly disabled via build config 00:02:06.481 pdump: explicitly disabled via build config 00:02:06.481 table: explicitly disabled via build config 00:02:06.481 pipeline: explicitly disabled via build config 00:02:06.481 graph: explicitly disabled via build config 00:02:06.481 node: explicitly disabled via build config 00:02:06.481 00:02:06.481 drivers: 00:02:06.482 common/cpt: not in enabled drivers build config 00:02:06.482 common/dpaax: not in enabled drivers build config 00:02:06.482 common/iavf: not in enabled drivers build config 00:02:06.482 common/idpf: not in enabled drivers build config 00:02:06.482 common/ionic: not in enabled drivers build config 00:02:06.482 common/mvep: not in enabled drivers build config 00:02:06.482 common/octeontx: not in enabled drivers build config 00:02:06.482 bus/auxiliary: not in enabled drivers build config 00:02:06.482 bus/cdx: not in enabled drivers build config 00:02:06.482 bus/dpaa: not in enabled drivers build config 00:02:06.482 bus/fslmc: not in enabled drivers build config 00:02:06.482 bus/ifpga: not in enabled drivers build config 00:02:06.482 bus/platform: not in enabled drivers build config 00:02:06.482 bus/uacce: not in enabled drivers build config 00:02:06.482 bus/vmbus: not in enabled drivers build config 00:02:06.482 common/cnxk: not in enabled drivers build config 00:02:06.482 common/mlx5: not in enabled drivers build config 00:02:06.482 common/nfp: not in enabled drivers build config 00:02:06.482 common/nitrox: not in enabled drivers build config 
00:02:06.482 common/qat: not in enabled drivers build config 00:02:06.482 common/sfc_efx: not in enabled drivers build config 00:02:06.482 mempool/bucket: not in enabled drivers build config 00:02:06.482 mempool/cnxk: not in enabled drivers build config 00:02:06.482 mempool/dpaa: not in enabled drivers build config 00:02:06.482 mempool/dpaa2: not in enabled drivers build config 00:02:06.482 mempool/octeontx: not in enabled drivers build config 00:02:06.482 mempool/stack: not in enabled drivers build config 00:02:06.482 dma/cnxk: not in enabled drivers build config 00:02:06.482 dma/dpaa: not in enabled drivers build config 00:02:06.482 dma/dpaa2: not in enabled drivers build config 00:02:06.482 dma/hisilicon: not in enabled drivers build config 00:02:06.482 dma/idxd: not in enabled drivers build config 00:02:06.482 dma/ioat: not in enabled drivers build config 00:02:06.482 dma/skeleton: not in enabled drivers build config 00:02:06.482 net/af_packet: not in enabled drivers build config 00:02:06.482 net/af_xdp: not in enabled drivers build config 00:02:06.482 net/ark: not in enabled drivers build config 00:02:06.482 net/atlantic: not in enabled drivers build config 00:02:06.482 net/avp: not in enabled drivers build config 00:02:06.482 net/axgbe: not in enabled drivers build config 00:02:06.482 net/bnx2x: not in enabled drivers build config 00:02:06.482 net/bnxt: not in enabled drivers build config 00:02:06.482 net/bonding: not in enabled drivers build config 00:02:06.482 net/cnxk: not in enabled drivers build config 00:02:06.482 net/cpfl: not in enabled drivers build config 00:02:06.482 net/cxgbe: not in enabled drivers build config 00:02:06.482 net/dpaa: not in enabled drivers build config 00:02:06.482 net/dpaa2: not in enabled drivers build config 00:02:06.482 net/e1000: not in enabled drivers build config 00:02:06.482 net/ena: not in enabled drivers build config 00:02:06.482 net/enetc: not in enabled drivers build config 00:02:06.482 net/enetfec: not in enabled drivers build config 00:02:06.482 net/enic: not in enabled drivers build config 00:02:06.482 net/failsafe: not in enabled drivers build config 00:02:06.482 net/fm10k: not in enabled drivers build config 00:02:06.482 net/gve: not in enabled drivers build config 00:02:06.482 net/hinic: not in enabled drivers build config 00:02:06.482 net/hns3: not in enabled drivers build config 00:02:06.482 net/i40e: not in enabled drivers build config 00:02:06.482 net/iavf: not in enabled drivers build config 00:02:06.482 net/ice: not in enabled drivers build config 00:02:06.482 net/idpf: not in enabled drivers build config 00:02:06.482 net/igc: not in enabled drivers build config 00:02:06.482 net/ionic: not in enabled drivers build config 00:02:06.482 net/ipn3ke: not in enabled drivers build config 00:02:06.482 net/ixgbe: not in enabled drivers build config 00:02:06.482 net/mana: not in enabled drivers build config 00:02:06.482 net/memif: not in enabled drivers build config 00:02:06.482 net/mlx4: not in enabled drivers build config 00:02:06.482 net/mlx5: not in enabled drivers build config 00:02:06.482 net/mvneta: not in enabled drivers build config 00:02:06.482 net/mvpp2: not in enabled drivers build config 00:02:06.482 net/netvsc: not in enabled drivers build config 00:02:06.482 net/nfb: not in enabled drivers build config 00:02:06.482 net/nfp: not in enabled drivers build config 00:02:06.482 net/ngbe: not in enabled drivers build config 00:02:06.482 net/null: not in enabled drivers build config 00:02:06.482 net/octeontx: not in enabled drivers 
build config 00:02:06.482 net/octeon_ep: not in enabled drivers build config 00:02:06.482 net/pcap: not in enabled drivers build config 00:02:06.482 net/pfe: not in enabled drivers build config 00:02:06.482 net/qede: not in enabled drivers build config 00:02:06.482 net/ring: not in enabled drivers build config 00:02:06.482 net/sfc: not in enabled drivers build config 00:02:06.482 net/softnic: not in enabled drivers build config 00:02:06.482 net/tap: not in enabled drivers build config 00:02:06.482 net/thunderx: not in enabled drivers build config 00:02:06.482 net/txgbe: not in enabled drivers build config 00:02:06.482 net/vdev_netvsc: not in enabled drivers build config 00:02:06.482 net/vhost: not in enabled drivers build config 00:02:06.482 net/virtio: not in enabled drivers build config 00:02:06.482 net/vmxnet3: not in enabled drivers build config 00:02:06.482 raw/*: missing internal dependency, "rawdev" 00:02:06.482 crypto/armv8: not in enabled drivers build config 00:02:06.482 crypto/bcmfs: not in enabled drivers build config 00:02:06.482 crypto/caam_jr: not in enabled drivers build config 00:02:06.482 crypto/ccp: not in enabled drivers build config 00:02:06.482 crypto/cnxk: not in enabled drivers build config 00:02:06.482 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.482 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.482 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.482 crypto/mlx5: not in enabled drivers build config 00:02:06.482 crypto/mvsam: not in enabled drivers build config 00:02:06.482 crypto/nitrox: not in enabled drivers build config 00:02:06.482 crypto/null: not in enabled drivers build config 00:02:06.482 crypto/octeontx: not in enabled drivers build config 00:02:06.482 crypto/openssl: not in enabled drivers build config 00:02:06.482 crypto/scheduler: not in enabled drivers build config 00:02:06.482 crypto/uadk: not in enabled drivers build config 00:02:06.482 crypto/virtio: not in enabled drivers build config 00:02:06.482 compress/isal: not in enabled drivers build config 00:02:06.482 compress/mlx5: not in enabled drivers build config 00:02:06.482 compress/nitrox: not in enabled drivers build config 00:02:06.482 compress/octeontx: not in enabled drivers build config 00:02:06.482 compress/zlib: not in enabled drivers build config 00:02:06.482 regex/*: missing internal dependency, "regexdev" 00:02:06.482 ml/*: missing internal dependency, "mldev" 00:02:06.482 vdpa/ifc: not in enabled drivers build config 00:02:06.482 vdpa/mlx5: not in enabled drivers build config 00:02:06.482 vdpa/nfp: not in enabled drivers build config 00:02:06.482 vdpa/sfc: not in enabled drivers build config 00:02:06.482 event/*: missing internal dependency, "eventdev" 00:02:06.482 baseband/*: missing internal dependency, "bbdev" 00:02:06.482 gpu/*: missing internal dependency, "gpudev" 00:02:06.482 00:02:06.482 00:02:06.482 Build targets in project: 85 00:02:06.482 00:02:06.482 DPDK 24.03.0 00:02:06.482 00:02:06.482 User defined options 00:02:06.482 buildtype : debug 00:02:06.482 default_library : shared 00:02:06.482 libdir : lib 00:02:06.482 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.482 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:06.482 c_link_args : 00:02:06.482 cpu_instruction_set: native 00:02:06.482 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:06.482 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:06.482 enable_docs : false 00:02:06.482 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:06.482 enable_kmods : false 00:02:06.482 max_lcores : 128 00:02:06.482 tests : false 00:02:06.482 00:02:06.482 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.482 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:06.482 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.741 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.741 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:06.741 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.741 [5/268] Linking static target lib/librte_kvargs.a 00:02:06.741 [6/268] Linking static target lib/librte_log.a 00:02:07.307 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.307 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.307 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.307 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.307 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.307 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.565 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.565 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.565 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.565 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.565 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.565 [18/268] Linking static target lib/librte_telemetry.a 00:02:07.823 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.823 [20/268] Linking target lib/librte_log.so.24.1 00:02:08.081 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.081 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.340 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.340 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.340 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.340 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.340 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.340 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.340 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.598 [30/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.598 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.598 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.598 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.598 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.857 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.857 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:08.857 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.114 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.114 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.114 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.372 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.372 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.372 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.372 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.372 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.631 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.631 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.631 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.891 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.891 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.891 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.151 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.151 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.151 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.719 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.719 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.719 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.720 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.720 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.720 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.720 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.720 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.978 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.978 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.237 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.237 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.237 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.497 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.757 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.757 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.757 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.016 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.016 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.016 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.016 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.016 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.016 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.275 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.275 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.275 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.275 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.534 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.793 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.793 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.793 [85/268] Linking static target lib/librte_ring.a 00:02:12.793 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.793 [87/268] Linking static target lib/librte_eal.a 00:02:13.052 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.052 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.052 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.052 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.052 [92/268] Linking static target lib/librte_rcu.a 00:02:13.311 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.311 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.311 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.311 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.311 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.311 [98/268] Linking static target lib/librte_mempool.a 00:02:13.590 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.590 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.590 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.590 [102/268] Linking static target lib/librte_mbuf.a 00:02:13.590 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.849 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.849 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.849 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.108 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.108 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.108 [109/268] Linking static target lib/librte_net.a 00:02:14.108 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.366 [111/268] Linking static target lib/librte_meter.a 00:02:14.623 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.623 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.623 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.623 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.623 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.623 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.623 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.881 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.139 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.398 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.398 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.398 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.657 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.657 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.657 [126/268] Linking static target lib/librte_pci.a 00:02:15.916 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.916 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.916 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.916 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.175 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.175 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.175 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.175 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.175 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.175 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.175 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.175 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.175 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.175 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.175 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.175 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.175 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.175 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.433 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.691 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.691 [147/268] Linking static target lib/librte_ethdev.a 00:02:16.691 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.691 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.954 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.954 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.954 [152/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.213 [153/268] Linking static target lib/librte_cmdline.a 00:02:17.213 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.213 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.213 [156/268] Linking static target lib/librte_timer.a 00:02:17.213 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.471 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.741 [159/268] Linking static target lib/librte_hash.a 00:02:17.741 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.741 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.741 [162/268] Linking static target lib/librte_compressdev.a 00:02:17.741 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.741 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.741 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.029 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.029 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.288 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.288 [169/268] Linking static target lib/librte_dmadev.a 00:02:18.547 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.547 [171/268] Linking static target lib/librte_cryptodev.a 00:02:18.547 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.547 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.547 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.547 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.806 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.806 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.806 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.063 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.063 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.063 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.321 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:19.321 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.321 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.579 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.579 [186/268] Linking static target lib/librte_power.a 00:02:19.579 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.579 [188/268] Linking static target lib/librte_reorder.a 00:02:19.836 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.836 [190/268] Linking static target lib/librte_security.a 00:02:20.095 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.095 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.095 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.355 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.355 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.928 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.928 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.928 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.928 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.928 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.186 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:21.186 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:21.443 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.443 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.701 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.701 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:21.701 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.701 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.967 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.967 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.967 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.967 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.967 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.967 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.967 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.967 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.242 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.242 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.242 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.242 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.242 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.242 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:22.501 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.501 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.501 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.501 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.501 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.759 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:23.326 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.326 [230/268] Linking static target lib/librte_vhost.a 00:02:24.261 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.261 [232/268] Linking target lib/librte_eal.so.24.1 00:02:24.261 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.261 [234/268] Linking target lib/librte_meter.so.24.1 00:02:24.261 [235/268] Linking target lib/librte_pci.so.24.1 00:02:24.261 [236/268] Linking target lib/librte_ring.so.24.1 00:02:24.519 [237/268] Linking target lib/librte_timer.so.24.1 00:02:24.519 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.519 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.519 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.519 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.519 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.519 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.519 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.519 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.519 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:24.519 [247/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.519 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:24.519 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.777 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.777 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.777 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.777 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.037 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.037 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:25.037 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.037 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.037 [258/268] Linking target lib/librte_net.so.24.1 00:02:25.037 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.037 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.037 [261/268] Linking target lib/librte_hash.so.24.1 00:02:25.296 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.296 [263/268] Linking target lib/librte_security.so.24.1 00:02:25.296 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.296 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.296 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.296 [267/268] Linking target lib/librte_power.so.24.1 00:02:25.296 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.296 INFO: autodetecting backend as ninja 00:02:25.296 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:51.879 CC lib/ut_mock/mock.o 00:02:51.879 CC lib/log/log.o 00:02:51.879 CC lib/log/log_flags.o 00:02:51.879 CC 
lib/ut/ut.o 00:02:51.879 CC lib/log/log_deprecated.o 00:02:51.879 LIB libspdk_ut.a 00:02:51.879 LIB libspdk_ut_mock.a 00:02:51.879 SO libspdk_ut.so.2.0 00:02:51.879 SO libspdk_ut_mock.so.6.0 00:02:51.879 LIB libspdk_log.a 00:02:51.879 SO libspdk_log.so.7.1 00:02:51.879 SYMLINK libspdk_ut.so 00:02:51.879 SYMLINK libspdk_ut_mock.so 00:02:51.879 SYMLINK libspdk_log.so 00:02:51.879 CC lib/ioat/ioat.o 00:02:51.879 CC lib/util/base64.o 00:02:51.879 CC lib/util/bit_array.o 00:02:51.879 CC lib/util/cpuset.o 00:02:51.879 CC lib/util/crc16.o 00:02:51.879 CC lib/util/crc32.o 00:02:51.879 CC lib/util/crc32c.o 00:02:51.879 CC lib/dma/dma.o 00:02:51.879 CXX lib/trace_parser/trace.o 00:02:51.879 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.879 CC lib/util/crc32_ieee.o 00:02:51.879 CC lib/util/crc64.o 00:02:51.879 CC lib/util/dif.o 00:02:51.879 CC lib/util/fd.o 00:02:52.138 CC lib/util/fd_group.o 00:02:52.138 LIB libspdk_dma.a 00:02:52.138 CC lib/util/file.o 00:02:52.138 SO libspdk_dma.so.5.0 00:02:52.138 LIB libspdk_ioat.a 00:02:52.138 CC lib/vfio_user/host/vfio_user.o 00:02:52.138 SO libspdk_ioat.so.7.0 00:02:52.138 CC lib/util/hexlify.o 00:02:52.138 CC lib/util/iov.o 00:02:52.138 SYMLINK libspdk_dma.so 00:02:52.138 CC lib/util/math.o 00:02:52.138 SYMLINK libspdk_ioat.so 00:02:52.138 CC lib/util/net.o 00:02:52.138 CC lib/util/pipe.o 00:02:52.470 CC lib/util/strerror_tls.o 00:02:52.470 CC lib/util/string.o 00:02:52.470 CC lib/util/uuid.o 00:02:52.470 LIB libspdk_vfio_user.a 00:02:52.470 CC lib/util/xor.o 00:02:52.470 CC lib/util/zipf.o 00:02:52.470 SO libspdk_vfio_user.so.5.0 00:02:52.470 CC lib/util/md5.o 00:02:52.470 SYMLINK libspdk_vfio_user.so 00:02:52.728 LIB libspdk_util.a 00:02:52.729 SO libspdk_util.so.10.1 00:02:52.987 LIB libspdk_trace_parser.a 00:02:52.987 SO libspdk_trace_parser.so.6.0 00:02:52.987 SYMLINK libspdk_util.so 00:02:52.987 SYMLINK libspdk_trace_parser.so 00:02:53.245 CC lib/vmd/vmd.o 00:02:53.245 CC lib/vmd/led.o 00:02:53.245 CC lib/json/json_parse.o 00:02:53.245 CC lib/json/json_util.o 00:02:53.245 CC lib/json/json_write.o 00:02:53.246 CC lib/conf/conf.o 00:02:53.246 CC lib/idxd/idxd.o 00:02:53.246 CC lib/idxd/idxd_user.o 00:02:53.246 CC lib/rdma_utils/rdma_utils.o 00:02:53.246 CC lib/env_dpdk/env.o 00:02:53.246 CC lib/env_dpdk/memory.o 00:02:53.505 CC lib/env_dpdk/pci.o 00:02:53.505 CC lib/env_dpdk/init.o 00:02:53.505 LIB libspdk_conf.a 00:02:53.505 LIB libspdk_rdma_utils.a 00:02:53.505 LIB libspdk_json.a 00:02:53.505 CC lib/env_dpdk/threads.o 00:02:53.505 SO libspdk_conf.so.6.0 00:02:53.505 SO libspdk_rdma_utils.so.1.0 00:02:53.505 SO libspdk_json.so.6.0 00:02:53.505 SYMLINK libspdk_conf.so 00:02:53.505 CC lib/idxd/idxd_kernel.o 00:02:53.505 SYMLINK libspdk_json.so 00:02:53.505 SYMLINK libspdk_rdma_utils.so 00:02:53.505 CC lib/env_dpdk/pci_ioat.o 00:02:53.505 CC lib/env_dpdk/pci_virtio.o 00:02:53.764 CC lib/env_dpdk/pci_vmd.o 00:02:53.764 LIB libspdk_idxd.a 00:02:53.764 CC lib/env_dpdk/pci_idxd.o 00:02:53.764 SO libspdk_idxd.so.12.1 00:02:53.764 CC lib/env_dpdk/pci_event.o 00:02:53.764 CC lib/env_dpdk/sigbus_handler.o 00:02:53.764 CC lib/env_dpdk/pci_dpdk.o 00:02:53.764 LIB libspdk_vmd.a 00:02:53.764 SYMLINK libspdk_idxd.so 00:02:53.764 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.023 SO libspdk_vmd.so.6.0 00:02:54.023 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.023 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.023 CC lib/rdma_provider/common.o 00:02:54.023 SYMLINK libspdk_vmd.so 00:02:54.023 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.023 CC lib/rdma_provider/rdma_provider_verbs.o 
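Note on the LIB / SO / SYMLINK entries above: for each library SPDK emits a static archive, a versioned shared object, and an unversioned development symlink. A minimal sketch of what one such sequence corresponds to is shown below, using libspdk_log from the entries above; the compiler invocations, flags, and intermediate file names are illustrative assumptions, not SPDK's actual make rules.
# illustrative sketch only -- not the exact commands behind the log entries above
cc -fPIC -c lib/log/log.c -o log.o                                     # CC  lib/log/log.o
ar crs libspdk_log.a log.o                                             # LIB libspdk_log.a
cc -shared -Wl,-soname,libspdk_log.so.7.1 -o libspdk_log.so.7.1 log.o  # SO  libspdk_log.so.7.1
ln -sf libspdk_log.so.7.1 libspdk_log.so                               # SYMLINK libspdk_log.so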
00:02:54.023 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.023 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.313 LIB libspdk_rdma_provider.a 00:02:54.313 LIB libspdk_jsonrpc.a 00:02:54.313 SO libspdk_rdma_provider.so.7.0 00:02:54.313 SO libspdk_jsonrpc.so.6.0 00:02:54.313 SYMLINK libspdk_rdma_provider.so 00:02:54.313 SYMLINK libspdk_jsonrpc.so 00:02:54.571 CC lib/rpc/rpc.o 00:02:54.571 LIB libspdk_env_dpdk.a 00:02:54.830 SO libspdk_env_dpdk.so.15.1 00:02:54.830 LIB libspdk_rpc.a 00:02:54.830 SO libspdk_rpc.so.6.0 00:02:54.830 SYMLINK libspdk_env_dpdk.so 00:02:54.830 SYMLINK libspdk_rpc.so 00:02:55.088 CC lib/trace/trace.o 00:02:55.088 CC lib/trace/trace_flags.o 00:02:55.088 CC lib/trace/trace_rpc.o 00:02:55.088 CC lib/notify/notify.o 00:02:55.088 CC lib/notify/notify_rpc.o 00:02:55.088 CC lib/keyring/keyring.o 00:02:55.088 CC lib/keyring/keyring_rpc.o 00:02:55.346 LIB libspdk_notify.a 00:02:55.346 LIB libspdk_keyring.a 00:02:55.346 SO libspdk_notify.so.6.0 00:02:55.346 SO libspdk_keyring.so.2.0 00:02:55.604 LIB libspdk_trace.a 00:02:55.604 SYMLINK libspdk_notify.so 00:02:55.604 SO libspdk_trace.so.11.0 00:02:55.604 SYMLINK libspdk_keyring.so 00:02:55.604 SYMLINK libspdk_trace.so 00:02:55.905 CC lib/thread/thread.o 00:02:55.905 CC lib/thread/iobuf.o 00:02:55.905 CC lib/sock/sock.o 00:02:55.905 CC lib/sock/sock_rpc.o 00:02:56.471 LIB libspdk_sock.a 00:02:56.471 SO libspdk_sock.so.10.0 00:02:56.471 SYMLINK libspdk_sock.so 00:02:56.728 CC lib/nvme/nvme_ctrlr.o 00:02:56.728 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.728 CC lib/nvme/nvme_fabric.o 00:02:56.728 CC lib/nvme/nvme_ns_cmd.o 00:02:56.728 CC lib/nvme/nvme_ns.o 00:02:56.728 CC lib/nvme/nvme_pcie_common.o 00:02:56.728 CC lib/nvme/nvme_pcie.o 00:02:56.728 CC lib/nvme/nvme.o 00:02:56.728 CC lib/nvme/nvme_qpair.o 00:02:57.733 LIB libspdk_thread.a 00:02:57.733 CC lib/nvme/nvme_quirks.o 00:02:57.733 SO libspdk_thread.so.11.0 00:02:57.733 SYMLINK libspdk_thread.so 00:02:57.733 CC lib/nvme/nvme_transport.o 00:02:57.733 CC lib/nvme/nvme_discovery.o 00:02:57.733 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.733 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.733 CC lib/nvme/nvme_tcp.o 00:02:57.991 CC lib/nvme/nvme_opal.o 00:02:57.991 CC lib/accel/accel.o 00:02:57.991 CC lib/blob/blobstore.o 00:02:58.250 CC lib/nvme/nvme_io_msg.o 00:02:58.250 CC lib/accel/accel_rpc.o 00:02:58.250 CC lib/accel/accel_sw.o 00:02:58.250 CC lib/nvme/nvme_poll_group.o 00:02:58.509 CC lib/nvme/nvme_zns.o 00:02:58.509 CC lib/nvme/nvme_stubs.o 00:02:58.509 CC lib/nvme/nvme_auth.o 00:02:58.509 CC lib/nvme/nvme_cuse.o 00:02:58.767 CC lib/nvme/nvme_rdma.o 00:02:59.040 CC lib/blob/request.o 00:02:59.040 LIB libspdk_accel.a 00:02:59.040 SO libspdk_accel.so.16.0 00:02:59.301 SYMLINK libspdk_accel.so 00:02:59.301 CC lib/blob/zeroes.o 00:02:59.301 CC lib/blob/blob_bs_dev.o 00:02:59.301 CC lib/init/json_config.o 00:02:59.301 CC lib/virtio/virtio.o 00:02:59.301 CC lib/virtio/virtio_vhost_user.o 00:02:59.560 CC lib/virtio/virtio_vfio_user.o 00:02:59.560 CC lib/init/subsystem.o 00:02:59.560 CC lib/fsdev/fsdev.o 00:02:59.560 CC lib/init/subsystem_rpc.o 00:02:59.560 CC lib/bdev/bdev.o 00:02:59.560 CC lib/bdev/bdev_rpc.o 00:02:59.560 CC lib/bdev/bdev_zone.o 00:02:59.560 CC lib/bdev/part.o 00:02:59.560 CC lib/bdev/scsi_nvme.o 00:02:59.560 CC lib/init/rpc.o 00:02:59.819 CC lib/virtio/virtio_pci.o 00:02:59.819 CC lib/fsdev/fsdev_io.o 00:02:59.819 LIB libspdk_init.a 00:02:59.819 CC lib/fsdev/fsdev_rpc.o 00:02:59.819 SO libspdk_init.so.6.0 00:03:00.079 SYMLINK libspdk_init.so 00:03:00.079 LIB libspdk_virtio.a 
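Before the SPDK objects above were compiled, the bundled DPDK was configured and built with the options recorded in the "User defined options" block near the start of this section (buildtype debug, shared default_library, trimmed disable_apps/disable_libs lists, a small enable_drivers set, max_lcores 128). A roughly equivalent standalone invocation is sketched below; in this run the flags come from SPDK's DPDK build scripting, so the exact command and the abbreviated option lists here are assumptions for illustration.
# illustrative sketch only -- abbreviated lists; the full ones are in the log above
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
  --buildtype=debug -Ddefault_library=shared \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps=dumpcap,graph,pdump \
  -Ddisable_libs=acl,argparse,bbdev \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp -j 10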
00:03:00.079 SO libspdk_virtio.so.7.0 00:03:00.079 SYMLINK libspdk_virtio.so 00:03:00.079 CC lib/event/reactor.o 00:03:00.079 CC lib/event/app.o 00:03:00.079 CC lib/event/app_rpc.o 00:03:00.079 CC lib/event/scheduler_static.o 00:03:00.079 CC lib/event/log_rpc.o 00:03:00.079 LIB libspdk_nvme.a 00:03:00.337 LIB libspdk_fsdev.a 00:03:00.337 SO libspdk_fsdev.so.2.0 00:03:00.337 SYMLINK libspdk_fsdev.so 00:03:00.337 SO libspdk_nvme.so.15.0 00:03:00.596 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:00.596 LIB libspdk_event.a 00:03:00.596 SYMLINK libspdk_nvme.so 00:03:00.596 SO libspdk_event.so.14.0 00:03:00.854 SYMLINK libspdk_event.so 00:03:01.113 LIB libspdk_blob.a 00:03:01.113 LIB libspdk_fuse_dispatcher.a 00:03:01.373 SO libspdk_blob.so.12.0 00:03:01.373 SO libspdk_fuse_dispatcher.so.1.0 00:03:01.373 SYMLINK libspdk_fuse_dispatcher.so 00:03:01.373 SYMLINK libspdk_blob.so 00:03:01.632 CC lib/blobfs/tree.o 00:03:01.632 CC lib/blobfs/blobfs.o 00:03:01.632 CC lib/lvol/lvol.o 00:03:02.577 LIB libspdk_bdev.a 00:03:02.577 SO libspdk_bdev.so.17.0 00:03:02.577 LIB libspdk_lvol.a 00:03:02.577 LIB libspdk_blobfs.a 00:03:02.577 SO libspdk_lvol.so.11.0 00:03:02.577 SO libspdk_blobfs.so.11.0 00:03:02.577 SYMLINK libspdk_bdev.so 00:03:02.577 SYMLINK libspdk_blobfs.so 00:03:02.577 SYMLINK libspdk_lvol.so 00:03:02.836 CC lib/nbd/nbd.o 00:03:02.836 CC lib/nbd/nbd_rpc.o 00:03:02.836 CC lib/scsi/dev.o 00:03:02.836 CC lib/scsi/lun.o 00:03:02.836 CC lib/scsi/scsi.o 00:03:02.836 CC lib/scsi/port.o 00:03:02.836 CC lib/scsi/scsi_bdev.o 00:03:02.836 CC lib/nvmf/ctrlr.o 00:03:02.836 CC lib/ublk/ublk.o 00:03:02.836 CC lib/ftl/ftl_core.o 00:03:03.094 CC lib/ftl/ftl_init.o 00:03:03.094 CC lib/ftl/ftl_layout.o 00:03:03.094 CC lib/ublk/ublk_rpc.o 00:03:03.094 CC lib/scsi/scsi_pr.o 00:03:03.094 CC lib/nvmf/ctrlr_discovery.o 00:03:03.094 CC lib/ftl/ftl_debug.o 00:03:03.353 CC lib/scsi/scsi_rpc.o 00:03:03.353 CC lib/ftl/ftl_io.o 00:03:03.353 LIB libspdk_nbd.a 00:03:03.353 CC lib/scsi/task.o 00:03:03.353 SO libspdk_nbd.so.7.0 00:03:03.353 CC lib/nvmf/ctrlr_bdev.o 00:03:03.353 CC lib/ftl/ftl_sb.o 00:03:03.353 SYMLINK libspdk_nbd.so 00:03:03.353 CC lib/nvmf/subsystem.o 00:03:03.353 CC lib/ftl/ftl_l2p.o 00:03:03.353 CC lib/ftl/ftl_l2p_flat.o 00:03:03.611 LIB libspdk_ublk.a 00:03:03.611 SO libspdk_ublk.so.3.0 00:03:03.611 LIB libspdk_scsi.a 00:03:03.611 SYMLINK libspdk_ublk.so 00:03:03.611 CC lib/nvmf/nvmf.o 00:03:03.611 CC lib/nvmf/nvmf_rpc.o 00:03:03.611 CC lib/nvmf/transport.o 00:03:03.611 SO libspdk_scsi.so.9.0 00:03:03.611 CC lib/ftl/ftl_nv_cache.o 00:03:03.611 CC lib/ftl/ftl_band.o 00:03:03.611 CC lib/nvmf/tcp.o 00:03:03.973 SYMLINK libspdk_scsi.so 00:03:03.973 CC lib/ftl/ftl_band_ops.o 00:03:03.973 CC lib/nvmf/stubs.o 00:03:04.242 CC lib/nvmf/mdns_server.o 00:03:04.242 CC lib/nvmf/rdma.o 00:03:04.501 CC lib/nvmf/auth.o 00:03:04.501 CC lib/ftl/ftl_writer.o 00:03:04.501 CC lib/ftl/ftl_rq.o 00:03:04.501 CC lib/ftl/ftl_reloc.o 00:03:04.501 CC lib/iscsi/conn.o 00:03:04.761 CC lib/ftl/ftl_l2p_cache.o 00:03:04.761 CC lib/vhost/vhost.o 00:03:04.761 CC lib/vhost/vhost_rpc.o 00:03:04.761 CC lib/vhost/vhost_scsi.o 00:03:04.761 CC lib/iscsi/init_grp.o 00:03:05.019 CC lib/ftl/ftl_p2l.o 00:03:05.019 CC lib/iscsi/iscsi.o 00:03:05.279 CC lib/vhost/vhost_blk.o 00:03:05.279 CC lib/iscsi/param.o 00:03:05.279 CC lib/iscsi/portal_grp.o 00:03:05.279 CC lib/ftl/ftl_p2l_log.o 00:03:05.279 CC lib/iscsi/tgt_node.o 00:03:05.538 CC lib/iscsi/iscsi_subsystem.o 00:03:05.538 CC lib/iscsi/iscsi_rpc.o 00:03:05.538 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.538 
CC lib/iscsi/task.o 00:03:05.796 CC lib/vhost/rte_vhost_user.o 00:03:05.796 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.796 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.796 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.796 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.796 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.084 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.084 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.084 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.084 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.084 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.084 LIB libspdk_nvmf.a 00:03:06.084 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:06.343 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:06.343 CC lib/ftl/utils/ftl_conf.o 00:03:06.343 CC lib/ftl/utils/ftl_md.o 00:03:06.343 SO libspdk_nvmf.so.20.0 00:03:06.343 CC lib/ftl/utils/ftl_mempool.o 00:03:06.343 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.343 CC lib/ftl/utils/ftl_property.o 00:03:06.343 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.343 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.601 SYMLINK libspdk_nvmf.so 00:03:06.601 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.601 LIB libspdk_iscsi.a 00:03:06.601 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.601 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.601 SO libspdk_iscsi.so.8.0 00:03:06.601 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.601 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.601 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.602 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.860 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.860 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.860 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:06.860 SYMLINK libspdk_iscsi.so 00:03:06.860 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:06.860 LIB libspdk_vhost.a 00:03:06.860 CC lib/ftl/base/ftl_base_dev.o 00:03:06.860 SO libspdk_vhost.so.8.0 00:03:06.860 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.860 CC lib/ftl/ftl_trace.o 00:03:06.860 SYMLINK libspdk_vhost.so 00:03:07.119 LIB libspdk_ftl.a 00:03:07.377 SO libspdk_ftl.so.9.0 00:03:07.636 SYMLINK libspdk_ftl.so 00:03:08.203 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.203 CC module/accel/error/accel_error.o 00:03:08.203 CC module/accel/ioat/accel_ioat.o 00:03:08.203 CC module/blob/bdev/blob_bdev.o 00:03:08.203 CC module/keyring/file/keyring.o 00:03:08.203 CC module/keyring/linux/keyring.o 00:03:08.203 CC module/sock/posix/posix.o 00:03:08.203 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.203 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.203 CC module/fsdev/aio/fsdev_aio.o 00:03:08.203 LIB libspdk_env_dpdk_rpc.a 00:03:08.203 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.203 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.203 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:08.203 CC module/keyring/file/keyring_rpc.o 00:03:08.203 CC module/keyring/linux/keyring_rpc.o 00:03:08.203 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.461 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.461 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.461 LIB libspdk_scheduler_dynamic.a 00:03:08.461 CC module/accel/error/accel_error_rpc.o 00:03:08.461 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.461 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.461 CC module/fsdev/aio/linux_aio_mgr.o 00:03:08.461 LIB libspdk_keyring_linux.a 00:03:08.461 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.461 LIB libspdk_keyring_file.a 00:03:08.461 LIB libspdk_blob_bdev.a 00:03:08.461 SO libspdk_keyring_linux.so.1.0 00:03:08.461 SO libspdk_keyring_file.so.2.0 00:03:08.461 SO libspdk_blob_bdev.so.12.0 00:03:08.461 LIB libspdk_accel_ioat.a 00:03:08.461 LIB 
libspdk_accel_error.a 00:03:08.461 SO libspdk_accel_ioat.so.6.0 00:03:08.461 SO libspdk_accel_error.so.2.0 00:03:08.461 SYMLINK libspdk_keyring_file.so 00:03:08.461 SYMLINK libspdk_keyring_linux.so 00:03:08.719 SYMLINK libspdk_blob_bdev.so 00:03:08.719 SYMLINK libspdk_accel_ioat.so 00:03:08.719 SYMLINK libspdk_accel_error.so 00:03:08.719 CC module/sock/uring/uring.o 00:03:08.719 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.719 CC module/accel/dsa/accel_dsa.o 00:03:08.719 CC module/accel/iaa/accel_iaa.o 00:03:08.719 LIB libspdk_scheduler_gscheduler.a 00:03:08.978 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.978 CC module/bdev/gpt/gpt.o 00:03:08.978 CC module/bdev/delay/vbdev_delay.o 00:03:08.978 LIB libspdk_sock_posix.a 00:03:08.978 CC module/bdev/error/vbdev_error.o 00:03:08.978 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.978 LIB libspdk_fsdev_aio.a 00:03:08.978 SO libspdk_sock_posix.so.6.0 00:03:08.978 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.978 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.978 SO libspdk_fsdev_aio.so.1.0 00:03:08.978 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.978 SYMLINK libspdk_fsdev_aio.so 00:03:08.978 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.978 SYMLINK libspdk_sock_posix.so 00:03:08.978 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.236 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.236 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.236 LIB libspdk_accel_iaa.a 00:03:09.236 SO libspdk_accel_iaa.so.3.0 00:03:09.236 LIB libspdk_bdev_error.a 00:03:09.236 LIB libspdk_accel_dsa.a 00:03:09.236 LIB libspdk_blobfs_bdev.a 00:03:09.236 SO libspdk_bdev_error.so.6.0 00:03:09.236 SO libspdk_accel_dsa.so.5.0 00:03:09.236 SO libspdk_blobfs_bdev.so.6.0 00:03:09.236 SYMLINK libspdk_accel_iaa.so 00:03:09.236 SYMLINK libspdk_bdev_error.so 00:03:09.236 LIB libspdk_bdev_delay.a 00:03:09.236 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.236 SYMLINK libspdk_blobfs_bdev.so 00:03:09.236 SYMLINK libspdk_accel_dsa.so 00:03:09.236 CC module/bdev/malloc/bdev_malloc.o 00:03:09.236 SO libspdk_bdev_delay.so.6.0 00:03:09.495 LIB libspdk_sock_uring.a 00:03:09.495 LIB libspdk_bdev_gpt.a 00:03:09.495 SYMLINK libspdk_bdev_delay.so 00:03:09.495 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.495 SO libspdk_sock_uring.so.5.0 00:03:09.495 SO libspdk_bdev_gpt.so.6.0 00:03:09.495 CC module/bdev/null/bdev_null.o 00:03:09.495 CC module/bdev/nvme/bdev_nvme.o 00:03:09.495 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.495 SYMLINK libspdk_sock_uring.so 00:03:09.495 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.495 CC module/bdev/raid/bdev_raid.o 00:03:09.495 CC module/bdev/split/vbdev_split.o 00:03:09.495 SYMLINK libspdk_bdev_gpt.so 00:03:09.495 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.754 CC module/bdev/null/bdev_null_rpc.o 00:03:09.754 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.754 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.754 LIB libspdk_bdev_malloc.a 00:03:09.754 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.754 SO libspdk_bdev_malloc.so.6.0 00:03:09.754 LIB libspdk_bdev_passthru.a 00:03:09.754 LIB libspdk_bdev_split.a 00:03:09.754 SYMLINK libspdk_bdev_malloc.so 00:03:09.754 SO libspdk_bdev_passthru.so.6.0 00:03:09.754 SO libspdk_bdev_split.so.6.0 00:03:09.754 LIB libspdk_bdev_null.a 00:03:10.012 SO libspdk_bdev_null.so.6.0 00:03:10.012 SYMLINK libspdk_bdev_split.so 00:03:10.012 SYMLINK libspdk_bdev_passthru.so 00:03:10.012 SYMLINK libspdk_bdev_null.so 00:03:10.012 CC module/bdev/raid/bdev_raid_sb.o 00:03:10.012 CC module/bdev/zone_block/vbdev_zone_block.o 
00:03:10.012 LIB libspdk_bdev_lvol.a 00:03:10.012 CC module/bdev/uring/bdev_uring.o 00:03:10.012 CC module/bdev/aio/bdev_aio.o 00:03:10.012 SO libspdk_bdev_lvol.so.6.0 00:03:10.012 CC module/bdev/ftl/bdev_ftl.o 00:03:10.270 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.270 SYMLINK libspdk_bdev_lvol.so 00:03:10.270 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.270 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:10.270 CC module/bdev/nvme/nvme_rpc.o 00:03:10.270 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.571 CC module/bdev/uring/bdev_uring_rpc.o 00:03:10.571 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.571 LIB libspdk_bdev_ftl.a 00:03:10.571 LIB libspdk_bdev_zone_block.a 00:03:10.571 SO libspdk_bdev_ftl.so.6.0 00:03:10.571 SO libspdk_bdev_zone_block.so.6.0 00:03:10.571 CC module/bdev/raid/raid0.o 00:03:10.571 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.571 LIB libspdk_bdev_aio.a 00:03:10.571 SYMLINK libspdk_bdev_ftl.so 00:03:10.571 CC module/bdev/nvme/vbdev_opal.o 00:03:10.571 SO libspdk_bdev_aio.so.6.0 00:03:10.571 CC module/bdev/raid/raid1.o 00:03:10.571 SYMLINK libspdk_bdev_zone_block.so 00:03:10.571 LIB libspdk_bdev_uring.a 00:03:10.571 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.571 SO libspdk_bdev_uring.so.6.0 00:03:10.571 SYMLINK libspdk_bdev_aio.so 00:03:10.571 CC module/bdev/raid/concat.o 00:03:10.571 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:10.862 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.862 SYMLINK libspdk_bdev_uring.so 00:03:10.862 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.862 LIB libspdk_bdev_iscsi.a 00:03:10.862 SO libspdk_bdev_iscsi.so.6.0 00:03:10.862 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.862 SYMLINK libspdk_bdev_iscsi.so 00:03:11.132 LIB libspdk_bdev_raid.a 00:03:11.132 SO libspdk_bdev_raid.so.6.0 00:03:11.132 SYMLINK libspdk_bdev_raid.so 00:03:11.391 LIB libspdk_bdev_virtio.a 00:03:11.391 SO libspdk_bdev_virtio.so.6.0 00:03:11.391 SYMLINK libspdk_bdev_virtio.so 00:03:12.325 LIB libspdk_bdev_nvme.a 00:03:12.325 SO libspdk_bdev_nvme.so.7.1 00:03:12.325 SYMLINK libspdk_bdev_nvme.so 00:03:12.890 CC module/event/subsystems/fsdev/fsdev.o 00:03:12.890 CC module/event/subsystems/sock/sock.o 00:03:12.890 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.890 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.890 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.891 CC module/event/subsystems/vmd/vmd.o 00:03:12.891 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.891 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.891 CC module/event/subsystems/keyring/keyring.o 00:03:12.891 LIB libspdk_event_keyring.a 00:03:12.891 LIB libspdk_event_fsdev.a 00:03:13.149 SO libspdk_event_keyring.so.1.0 00:03:13.149 LIB libspdk_event_sock.a 00:03:13.149 LIB libspdk_event_vhost_blk.a 00:03:13.149 SO libspdk_event_fsdev.so.1.0 00:03:13.149 LIB libspdk_event_vmd.a 00:03:13.149 LIB libspdk_event_iobuf.a 00:03:13.149 LIB libspdk_event_scheduler.a 00:03:13.149 SO libspdk_event_vhost_blk.so.3.0 00:03:13.149 SO libspdk_event_sock.so.5.0 00:03:13.149 SO libspdk_event_iobuf.so.3.0 00:03:13.149 SO libspdk_event_vmd.so.6.0 00:03:13.149 SO libspdk_event_scheduler.so.4.0 00:03:13.149 SYMLINK libspdk_event_fsdev.so 00:03:13.149 SYMLINK libspdk_event_keyring.so 00:03:13.149 SYMLINK libspdk_event_vhost_blk.so 00:03:13.149 SYMLINK libspdk_event_sock.so 00:03:13.149 SYMLINK libspdk_event_iobuf.so 00:03:13.149 SYMLINK libspdk_event_vmd.so 00:03:13.149 SYMLINK libspdk_event_scheduler.so 00:03:13.407 CC module/event/subsystems/accel/accel.o 00:03:13.665 LIB 
libspdk_event_accel.a 00:03:13.665 SO libspdk_event_accel.so.6.0 00:03:13.665 SYMLINK libspdk_event_accel.so 00:03:13.923 CC module/event/subsystems/bdev/bdev.o 00:03:14.233 LIB libspdk_event_bdev.a 00:03:14.233 SO libspdk_event_bdev.so.6.0 00:03:14.233 SYMLINK libspdk_event_bdev.so 00:03:14.491 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.491 CC module/event/subsystems/nbd/nbd.o 00:03:14.491 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.491 CC module/event/subsystems/ublk/ublk.o 00:03:14.491 CC module/event/subsystems/scsi/scsi.o 00:03:14.491 LIB libspdk_event_nbd.a 00:03:14.491 LIB libspdk_event_ublk.a 00:03:14.749 SO libspdk_event_ublk.so.3.0 00:03:14.749 SO libspdk_event_nbd.so.6.0 00:03:14.749 LIB libspdk_event_scsi.a 00:03:14.749 SO libspdk_event_scsi.so.6.0 00:03:14.749 SYMLINK libspdk_event_ublk.so 00:03:14.749 SYMLINK libspdk_event_nbd.so 00:03:14.749 LIB libspdk_event_nvmf.a 00:03:14.749 SYMLINK libspdk_event_scsi.so 00:03:14.749 SO libspdk_event_nvmf.so.6.0 00:03:14.749 SYMLINK libspdk_event_nvmf.so 00:03:15.008 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:15.008 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.266 LIB libspdk_event_vhost_scsi.a 00:03:15.266 SO libspdk_event_vhost_scsi.so.3.0 00:03:15.266 SYMLINK libspdk_event_vhost_scsi.so 00:03:15.266 LIB libspdk_event_iscsi.a 00:03:15.266 SO libspdk_event_iscsi.so.6.0 00:03:15.266 SYMLINK libspdk_event_iscsi.so 00:03:15.524 SO libspdk.so.6.0 00:03:15.524 SYMLINK libspdk.so 00:03:15.782 CC app/trace_record/trace_record.o 00:03:15.782 CXX app/trace/trace.o 00:03:15.782 TEST_HEADER include/spdk/accel.h 00:03:15.782 TEST_HEADER include/spdk/accel_module.h 00:03:15.782 TEST_HEADER include/spdk/assert.h 00:03:15.782 TEST_HEADER include/spdk/barrier.h 00:03:15.782 TEST_HEADER include/spdk/base64.h 00:03:15.782 TEST_HEADER include/spdk/bdev.h 00:03:15.782 TEST_HEADER include/spdk/bdev_module.h 00:03:15.782 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.782 TEST_HEADER include/spdk/bit_array.h 00:03:15.782 TEST_HEADER include/spdk/bit_pool.h 00:03:15.782 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.782 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.782 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.782 TEST_HEADER include/spdk/blobfs.h 00:03:15.782 TEST_HEADER include/spdk/blob.h 00:03:15.782 TEST_HEADER include/spdk/conf.h 00:03:15.782 TEST_HEADER include/spdk/config.h 00:03:15.782 TEST_HEADER include/spdk/cpuset.h 00:03:15.782 TEST_HEADER include/spdk/crc16.h 00:03:15.782 CC app/nvmf_tgt/nvmf_main.o 00:03:15.782 TEST_HEADER include/spdk/crc32.h 00:03:15.782 TEST_HEADER include/spdk/crc64.h 00:03:15.782 TEST_HEADER include/spdk/dif.h 00:03:15.782 TEST_HEADER include/spdk/dma.h 00:03:15.782 TEST_HEADER include/spdk/endian.h 00:03:15.782 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.782 TEST_HEADER include/spdk/env.h 00:03:15.782 TEST_HEADER include/spdk/event.h 00:03:15.782 TEST_HEADER include/spdk/fd_group.h 00:03:15.782 TEST_HEADER include/spdk/fd.h 00:03:15.782 TEST_HEADER include/spdk/file.h 00:03:15.782 TEST_HEADER include/spdk/fsdev.h 00:03:15.782 CC examples/util/zipf/zipf.o 00:03:15.782 TEST_HEADER include/spdk/fsdev_module.h 00:03:15.782 TEST_HEADER include/spdk/ftl.h 00:03:15.782 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:15.782 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.782 CC test/thread/poller_perf/poller_perf.o 00:03:15.782 TEST_HEADER include/spdk/hexlify.h 00:03:15.782 TEST_HEADER include/spdk/histogram_data.h 00:03:16.041 TEST_HEADER include/spdk/idxd.h 00:03:16.041 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:16.041 TEST_HEADER include/spdk/init.h 00:03:16.041 TEST_HEADER include/spdk/ioat.h 00:03:16.041 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.041 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.041 TEST_HEADER include/spdk/json.h 00:03:16.041 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.041 TEST_HEADER include/spdk/keyring.h 00:03:16.041 TEST_HEADER include/spdk/keyring_module.h 00:03:16.041 TEST_HEADER include/spdk/likely.h 00:03:16.041 TEST_HEADER include/spdk/log.h 00:03:16.041 CC test/app/bdev_svc/bdev_svc.o 00:03:16.041 CC test/dma/test_dma/test_dma.o 00:03:16.041 TEST_HEADER include/spdk/lvol.h 00:03:16.041 TEST_HEADER include/spdk/md5.h 00:03:16.041 TEST_HEADER include/spdk/memory.h 00:03:16.041 TEST_HEADER include/spdk/mmio.h 00:03:16.041 TEST_HEADER include/spdk/nbd.h 00:03:16.041 TEST_HEADER include/spdk/net.h 00:03:16.041 TEST_HEADER include/spdk/notify.h 00:03:16.041 TEST_HEADER include/spdk/nvme.h 00:03:16.041 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.041 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.041 CC test/env/mem_callbacks/mem_callbacks.o 00:03:16.041 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.041 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.041 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.041 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.041 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.041 TEST_HEADER include/spdk/nvmf.h 00:03:16.041 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.041 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.041 TEST_HEADER include/spdk/opal.h 00:03:16.041 TEST_HEADER include/spdk/opal_spec.h 00:03:16.041 TEST_HEADER include/spdk/pci_ids.h 00:03:16.041 TEST_HEADER include/spdk/pipe.h 00:03:16.041 LINK spdk_trace_record 00:03:16.041 TEST_HEADER include/spdk/queue.h 00:03:16.041 LINK nvmf_tgt 00:03:16.041 TEST_HEADER include/spdk/reduce.h 00:03:16.041 TEST_HEADER include/spdk/rpc.h 00:03:16.041 TEST_HEADER include/spdk/scheduler.h 00:03:16.041 TEST_HEADER include/spdk/scsi.h 00:03:16.041 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.041 LINK poller_perf 00:03:16.041 TEST_HEADER include/spdk/sock.h 00:03:16.041 TEST_HEADER include/spdk/stdinc.h 00:03:16.041 TEST_HEADER include/spdk/string.h 00:03:16.041 TEST_HEADER include/spdk/thread.h 00:03:16.041 TEST_HEADER include/spdk/trace.h 00:03:16.041 TEST_HEADER include/spdk/trace_parser.h 00:03:16.041 TEST_HEADER include/spdk/tree.h 00:03:16.041 LINK iscsi_tgt 00:03:16.041 TEST_HEADER include/spdk/ublk.h 00:03:16.041 TEST_HEADER include/spdk/util.h 00:03:16.041 TEST_HEADER include/spdk/uuid.h 00:03:16.041 LINK zipf 00:03:16.041 TEST_HEADER include/spdk/version.h 00:03:16.041 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.041 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.041 TEST_HEADER include/spdk/vhost.h 00:03:16.041 TEST_HEADER include/spdk/vmd.h 00:03:16.041 TEST_HEADER include/spdk/xor.h 00:03:16.041 TEST_HEADER include/spdk/zipf.h 00:03:16.041 CXX test/cpp_headers/accel.o 00:03:16.300 LINK bdev_svc 00:03:16.300 LINK spdk_trace 00:03:16.300 CC test/rpc_client/rpc_client_test.o 00:03:16.300 CXX test/cpp_headers/accel_module.o 00:03:16.300 CC app/spdk_tgt/spdk_tgt.o 00:03:16.559 CC examples/ioat/perf/perf.o 00:03:16.559 LINK test_dma 00:03:16.559 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.559 LINK rpc_client_test 00:03:16.559 CXX test/cpp_headers/assert.o 00:03:16.559 CC examples/idxd/perf/perf.o 00:03:16.559 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.559 CC test/event/event_perf/event_perf.o 00:03:16.559 LINK spdk_tgt 00:03:16.559 LINK lsvmd 
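The long run of TEST_HEADER includes followed by CXX test/cpp_headers/*.o entries through this part of the log appears to be a self-containedness check on SPDK's public headers: each spdk/*.h is compiled in its own C++ translation unit, so a header that forgets one of its own includes breaks the build. A minimal sketch of the idea, with an assumed file name and compiler flags rather than SPDK's generated sources:
# illustrative sketch only -- check that one public header compiles on its own
cat > hdr_check_accel.cpp <<'EOF'
#include <spdk/accel.h>   /* the header under test; deliberately the only include */
int main(void) { return 0; }
EOF
g++ -std=c++11 -Iinclude -c hdr_check_accel.cpp -o hdr_check_accel.o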
00:03:16.817 LINK ioat_perf 00:03:16.817 LINK mem_callbacks 00:03:16.817 CXX test/cpp_headers/barrier.o 00:03:16.817 CC test/event/reactor/reactor.o 00:03:16.817 LINK event_perf 00:03:16.817 CC examples/vmd/led/led.o 00:03:16.817 LINK idxd_perf 00:03:16.817 CC app/spdk_lspci/spdk_lspci.o 00:03:16.817 CC test/env/vtophys/vtophys.o 00:03:16.817 CXX test/cpp_headers/base64.o 00:03:17.077 LINK reactor 00:03:17.077 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.077 CC examples/ioat/verify/verify.o 00:03:17.077 LINK led 00:03:17.077 LINK nvme_fuzz 00:03:17.077 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.077 LINK spdk_lspci 00:03:17.077 LINK vtophys 00:03:17.077 CXX test/cpp_headers/bdev.o 00:03:17.077 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.077 LINK env_dpdk_post_init 00:03:17.336 CC test/event/reactor_perf/reactor_perf.o 00:03:17.336 LINK verify 00:03:17.336 CC test/event/app_repeat/app_repeat.o 00:03:17.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.336 CXX test/cpp_headers/bdev_module.o 00:03:17.336 CC test/event/scheduler/scheduler.o 00:03:17.336 CC app/spdk_nvme_perf/perf.o 00:03:17.336 LINK reactor_perf 00:03:17.336 CC test/env/memory/memory_ut.o 00:03:17.336 LINK app_repeat 00:03:17.594 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.594 CC test/accel/dif/dif.o 00:03:17.594 LINK scheduler 00:03:17.594 CXX test/cpp_headers/bdev_zone.o 00:03:17.594 CC test/env/pci/pci_ut.o 00:03:17.855 CC app/spdk_nvme_identify/identify.o 00:03:17.855 LINK vhost_fuzz 00:03:17.855 LINK interrupt_tgt 00:03:17.855 CXX test/cpp_headers/bit_array.o 00:03:17.855 CC test/app/histogram_perf/histogram_perf.o 00:03:17.855 CC test/app/jsoncat/jsoncat.o 00:03:18.116 CXX test/cpp_headers/bit_pool.o 00:03:18.116 LINK histogram_perf 00:03:18.116 LINK pci_ut 00:03:18.116 LINK jsoncat 00:03:18.116 CC examples/thread/thread/thread_ex.o 00:03:18.116 LINK dif 00:03:18.116 CXX test/cpp_headers/blob_bdev.o 00:03:18.376 LINK spdk_nvme_perf 00:03:18.376 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.377 CXX test/cpp_headers/blobfs.o 00:03:18.377 CXX test/cpp_headers/blob.o 00:03:18.377 CXX test/cpp_headers/conf.o 00:03:18.377 LINK thread 00:03:18.377 CXX test/cpp_headers/config.o 00:03:18.377 CXX test/cpp_headers/cpuset.o 00:03:18.639 CXX test/cpp_headers/crc16.o 00:03:18.639 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.639 LINK spdk_nvme_identify 00:03:18.639 CXX test/cpp_headers/crc32.o 00:03:18.639 CC app/spdk_top/spdk_top.o 00:03:18.639 CC examples/sock/hello_world/hello_sock.o 00:03:18.639 LINK memory_ut 00:03:18.639 CXX test/cpp_headers/crc64.o 00:03:18.639 LINK iscsi_fuzz 00:03:18.897 CC test/app/stub/stub.o 00:03:18.897 LINK spdk_nvme_discover 00:03:18.897 CXX test/cpp_headers/dif.o 00:03:18.897 LINK hello_sock 00:03:18.897 CC app/vhost/vhost.o 00:03:18.897 CC test/blobfs/mkfs/mkfs.o 00:03:18.897 CXX test/cpp_headers/dma.o 00:03:18.897 LINK stub 00:03:18.897 CC test/nvme/aer/aer.o 00:03:19.155 CC test/lvol/esnap/esnap.o 00:03:19.155 LINK vhost 00:03:19.155 CXX test/cpp_headers/endian.o 00:03:19.155 LINK mkfs 00:03:19.155 CC test/bdev/bdevio/bdevio.o 00:03:19.155 CC examples/accel/perf/accel_perf.o 00:03:19.414 CC examples/blob/hello_world/hello_blob.o 00:03:19.414 CC examples/blob/cli/blobcli.o 00:03:19.414 LINK aer 00:03:19.414 CXX test/cpp_headers/env_dpdk.o 00:03:19.414 CXX test/cpp_headers/env.o 00:03:19.414 LINK spdk_top 00:03:19.672 CXX test/cpp_headers/event.o 00:03:19.672 CC examples/nvme/hello_world/hello_world.o 00:03:19.672 CC test/nvme/reset/reset.o 00:03:19.672 
LINK hello_blob 00:03:19.672 CC examples/nvme/reconnect/reconnect.o 00:03:19.672 LINK bdevio 00:03:19.672 LINK accel_perf 00:03:19.672 CC app/spdk_dd/spdk_dd.o 00:03:19.672 CXX test/cpp_headers/fd_group.o 00:03:19.672 CXX test/cpp_headers/fd.o 00:03:19.930 LINK hello_world 00:03:19.930 CXX test/cpp_headers/file.o 00:03:19.930 LINK blobcli 00:03:19.930 LINK reset 00:03:19.930 CXX test/cpp_headers/fsdev.o 00:03:19.930 CXX test/cpp_headers/fsdev_module.o 00:03:19.930 LINK reconnect 00:03:20.188 CXX test/cpp_headers/ftl.o 00:03:20.188 CC test/nvme/sgl/sgl.o 00:03:20.188 CC test/nvme/e2edp/nvme_dp.o 00:03:20.188 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.188 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.188 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.188 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.188 LINK spdk_dd 00:03:20.448 CXX test/cpp_headers/fuse_dispatcher.o 00:03:20.448 CC app/fio/nvme/fio_plugin.o 00:03:20.448 LINK nvme_dp 00:03:20.448 LINK sgl 00:03:20.448 CXX test/cpp_headers/gpt_spec.o 00:03:20.448 LINK hello_fsdev 00:03:20.448 LINK hello_bdev 00:03:20.706 CC examples/nvme/arbitration/arbitration.o 00:03:20.706 CXX test/cpp_headers/hexlify.o 00:03:20.706 CC examples/nvme/hotplug/hotplug.o 00:03:20.706 CC test/nvme/overhead/overhead.o 00:03:20.706 LINK nvme_manage 00:03:20.706 CC test/nvme/err_injection/err_injection.o 00:03:20.706 CXX test/cpp_headers/histogram_data.o 00:03:20.964 CC app/fio/bdev/fio_plugin.o 00:03:20.964 LINK spdk_nvme 00:03:20.964 LINK arbitration 00:03:20.964 LINK hotplug 00:03:20.964 LINK err_injection 00:03:20.964 CXX test/cpp_headers/idxd.o 00:03:20.964 LINK bdevperf 00:03:20.964 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.222 LINK overhead 00:03:21.222 CC examples/nvme/abort/abort.o 00:03:21.222 CXX test/cpp_headers/idxd_spec.o 00:03:21.222 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.222 LINK cmb_copy 00:03:21.222 CC test/nvme/startup/startup.o 00:03:21.222 CXX test/cpp_headers/init.o 00:03:21.222 CC test/nvme/reserve/reserve.o 00:03:21.480 CC test/nvme/simple_copy/simple_copy.o 00:03:21.480 CC test/nvme/connect_stress/connect_stress.o 00:03:21.480 LINK spdk_bdev 00:03:21.480 LINK pmr_persistence 00:03:21.480 LINK startup 00:03:21.480 CXX test/cpp_headers/ioat.o 00:03:21.480 LINK abort 00:03:21.480 CC test/nvme/boot_partition/boot_partition.o 00:03:21.480 LINK reserve 00:03:21.740 LINK connect_stress 00:03:21.741 LINK simple_copy 00:03:21.741 CC test/nvme/compliance/nvme_compliance.o 00:03:21.741 CXX test/cpp_headers/ioat_spec.o 00:03:21.741 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.741 LINK boot_partition 00:03:21.741 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.741 CXX test/cpp_headers/iscsi_spec.o 00:03:21.741 CC test/nvme/fdp/fdp.o 00:03:21.741 CXX test/cpp_headers/json.o 00:03:21.999 CC test/nvme/cuse/cuse.o 00:03:21.999 CXX test/cpp_headers/jsonrpc.o 00:03:21.999 CC examples/nvmf/nvmf/nvmf.o 00:03:21.999 LINK fused_ordering 00:03:21.999 LINK doorbell_aers 00:03:21.999 CXX test/cpp_headers/keyring.o 00:03:21.999 LINK nvme_compliance 00:03:21.999 CXX test/cpp_headers/keyring_module.o 00:03:21.999 CXX test/cpp_headers/likely.o 00:03:21.999 CXX test/cpp_headers/log.o 00:03:22.258 CXX test/cpp_headers/lvol.o 00:03:22.258 CXX test/cpp_headers/md5.o 00:03:22.258 LINK fdp 00:03:22.258 CXX test/cpp_headers/memory.o 00:03:22.258 CXX test/cpp_headers/mmio.o 00:03:22.258 LINK nvmf 00:03:22.258 CXX test/cpp_headers/nbd.o 00:03:22.258 CXX test/cpp_headers/net.o 00:03:22.258 CXX test/cpp_headers/notify.o 
00:03:22.258 CXX test/cpp_headers/nvme.o 00:03:22.258 CXX test/cpp_headers/nvme_intel.o 00:03:22.258 CXX test/cpp_headers/nvme_ocssd.o 00:03:22.517 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.517 CXX test/cpp_headers/nvme_spec.o 00:03:22.517 CXX test/cpp_headers/nvme_zns.o 00:03:22.517 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.517 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.517 CXX test/cpp_headers/nvmf.o 00:03:22.517 CXX test/cpp_headers/nvmf_spec.o 00:03:22.517 CXX test/cpp_headers/nvmf_transport.o 00:03:22.517 CXX test/cpp_headers/opal.o 00:03:22.517 CXX test/cpp_headers/opal_spec.o 00:03:22.776 CXX test/cpp_headers/pci_ids.o 00:03:22.776 CXX test/cpp_headers/pipe.o 00:03:22.776 CXX test/cpp_headers/queue.o 00:03:22.776 CXX test/cpp_headers/reduce.o 00:03:22.776 CXX test/cpp_headers/rpc.o 00:03:22.776 CXX test/cpp_headers/scheduler.o 00:03:22.776 CXX test/cpp_headers/scsi.o 00:03:22.776 CXX test/cpp_headers/scsi_spec.o 00:03:22.776 CXX test/cpp_headers/sock.o 00:03:22.776 CXX test/cpp_headers/stdinc.o 00:03:22.776 CXX test/cpp_headers/string.o 00:03:22.776 CXX test/cpp_headers/thread.o 00:03:23.034 CXX test/cpp_headers/trace.o 00:03:23.034 CXX test/cpp_headers/trace_parser.o 00:03:23.034 CXX test/cpp_headers/tree.o 00:03:23.034 CXX test/cpp_headers/ublk.o 00:03:23.034 CXX test/cpp_headers/util.o 00:03:23.034 CXX test/cpp_headers/uuid.o 00:03:23.034 CXX test/cpp_headers/version.o 00:03:23.034 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.034 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.034 CXX test/cpp_headers/vhost.o 00:03:23.034 CXX test/cpp_headers/vmd.o 00:03:23.034 CXX test/cpp_headers/xor.o 00:03:23.034 CXX test/cpp_headers/zipf.o 00:03:23.293 LINK cuse 00:03:24.670 LINK esnap 00:03:24.928 00:03:24.928 real 1m31.256s 00:03:24.928 user 8m20.152s 00:03:24.928 sys 1m41.269s 00:03:24.928 20:24:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.928 20:24:25 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.928 ************************************ 00:03:24.928 END TEST make 00:03:24.928 ************************************ 00:03:24.928 20:24:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.928 20:24:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.928 20:24:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.928 20:24:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.928 20:24:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.928 20:24:25 -- pm/common@44 -- $ pid=5240 00:03:24.928 20:24:25 -- pm/common@50 -- $ kill -TERM 5240 00:03:24.928 20:24:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.928 20:24:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.928 20:24:25 -- pm/common@44 -- $ pid=5242 00:03:24.928 20:24:25 -- pm/common@50 -- $ kill -TERM 5242 00:03:24.928 20:24:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.928 20:24:25 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:24.928 20:24:25 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:24.928 20:24:25 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:24.928 20:24:25 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:25.188 20:24:25 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:25.188 20:24:25 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:03:25.188 20:24:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.188 20:24:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.188 20:24:25 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.188 20:24:25 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.188 20:24:25 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.188 20:24:25 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.188 20:24:25 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.188 20:24:25 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.188 20:24:25 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.188 20:24:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.188 20:24:25 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.188 20:24:25 -- scripts/common.sh@345 -- # : 1 00:03:25.188 20:24:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.188 20:24:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:25.188 20:24:25 -- scripts/common.sh@365 -- # decimal 1 00:03:25.188 20:24:25 -- scripts/common.sh@353 -- # local d=1 00:03:25.188 20:24:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.188 20:24:25 -- scripts/common.sh@355 -- # echo 1 00:03:25.188 20:24:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.188 20:24:25 -- scripts/common.sh@366 -- # decimal 2 00:03:25.188 20:24:25 -- scripts/common.sh@353 -- # local d=2 00:03:25.188 20:24:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.188 20:24:25 -- scripts/common.sh@355 -- # echo 2 00:03:25.188 20:24:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.188 20:24:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.188 20:24:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.188 20:24:25 -- scripts/common.sh@368 -- # return 0 00:03:25.188 20:24:25 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.188 20:24:25 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:25.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.188 --rc genhtml_branch_coverage=1 00:03:25.188 --rc genhtml_function_coverage=1 00:03:25.188 --rc genhtml_legend=1 00:03:25.188 --rc geninfo_all_blocks=1 00:03:25.188 --rc geninfo_unexecuted_blocks=1 00:03:25.188 00:03:25.188 ' 00:03:25.188 20:24:25 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:25.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.188 --rc genhtml_branch_coverage=1 00:03:25.188 --rc genhtml_function_coverage=1 00:03:25.188 --rc genhtml_legend=1 00:03:25.188 --rc geninfo_all_blocks=1 00:03:25.188 --rc geninfo_unexecuted_blocks=1 00:03:25.188 00:03:25.188 ' 00:03:25.188 20:24:25 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:25.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.188 --rc genhtml_branch_coverage=1 00:03:25.188 --rc genhtml_function_coverage=1 00:03:25.188 --rc genhtml_legend=1 00:03:25.188 --rc geninfo_all_blocks=1 00:03:25.188 --rc geninfo_unexecuted_blocks=1 00:03:25.188 00:03:25.188 ' 00:03:25.188 20:24:25 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:25.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.188 --rc genhtml_branch_coverage=1 00:03:25.188 --rc genhtml_function_coverage=1 00:03:25.188 --rc genhtml_legend=1 00:03:25.188 --rc geninfo_all_blocks=1 00:03:25.188 --rc geninfo_unexecuted_blocks=1 00:03:25.188 00:03:25.188 ' 00:03:25.188 20:24:25 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.188 20:24:25 -- nvmf/common.sh@7 -- # uname -s 00:03:25.188 20:24:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.188 20:24:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.188 20:24:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.188 20:24:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.188 20:24:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.188 20:24:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.188 20:24:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.188 20:24:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.188 20:24:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.188 20:24:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.188 20:24:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:03:25.188 20:24:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:03:25.188 20:24:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.188 20:24:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.188 20:24:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:25.188 20:24:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.188 20:24:25 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.188 20:24:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.188 20:24:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.188 20:24:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.188 20:24:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.188 20:24:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.188 20:24:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.188 20:24:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.188 20:24:25 -- paths/export.sh@5 -- # export PATH 00:03:25.188 20:24:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.188 20:24:25 -- nvmf/common.sh@51 -- # : 0 00:03:25.188 20:24:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.188 20:24:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:25.188 20:24:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.188 20:24:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.188 20:24:25 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.188 20:24:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.188 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.188 20:24:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.188 20:24:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.188 20:24:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.188 20:24:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.188 20:24:25 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.188 20:24:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.188 20:24:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.188 20:24:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.188 20:24:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.188 20:24:25 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.188 20:24:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.188 20:24:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.188 20:24:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.188 20:24:25 -- spdk/autotest.sh@48 -- # udevadm_pid=54361 00:03:25.188 20:24:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.188 20:24:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.188 20:24:25 -- pm/common@17 -- # local monitor 00:03:25.188 20:24:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.188 20:24:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.189 20:24:25 -- pm/common@21 -- # date +%s 00:03:25.189 20:24:25 -- pm/common@21 -- # date +%s 00:03:25.189 20:24:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652665 00:03:25.189 20:24:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652665 00:03:25.189 20:24:25 -- pm/common@25 -- # sleep 1 00:03:25.189 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652665_collect-cpu-load.pm.log 00:03:25.189 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652665_collect-vmstat.pm.log 00:03:26.125 20:24:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.125 20:24:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.125 20:24:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.125 20:24:26 -- common/autotest_common.sh@10 -- # set +x 00:03:26.125 20:24:26 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.125 20:24:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:26.125 20:24:26 -- common/autotest_common.sh@10 -- # set +x 00:03:26.384 20:24:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:26.384 20:24:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:26.384 20:24:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:26.384 20:24:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.384 20:24:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:26.384 20:24:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:03:26.384 20:24:26 -- common/autotest_common.sh@1457 -- # uname 00:03:26.384 20:24:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:26.384 20:24:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.384 20:24:26 -- common/autotest_common.sh@1477 -- # uname 00:03:26.384 20:24:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:26.384 20:24:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:26.384 20:24:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:26.384 lcov: LCOV version 1.15 00:03:26.384 20:24:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:44.471 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.471 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:02.618 20:25:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:02.618 20:25:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.618 20:25:00 -- common/autotest_common.sh@10 -- # set +x 00:04:02.618 20:25:00 -- spdk/autotest.sh@78 -- # rm -f 00:04:02.618 20:25:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.618 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:02.618 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:02.618 20:25:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:02.618 20:25:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:02.618 20:25:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:02.618 20:25:01 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:02.618 20:25:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:02.618 20:25:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:02.618 20:25:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:02.618 20:25:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:02.618 20:25:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:02.618 20:25:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:02.618 20:25:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:02.618 20:25:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:02.618 20:25:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:02.618 20:25:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:02.618 20:25:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:02.618 20:25:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:02.618 20:25:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.618 20:25:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:02.618 20:25:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:02.618 20:25:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.618 20:25:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:02.618 20:25:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:02.618 20:25:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:02.618 20:25:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:02.618 No valid GPT data, bailing 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # pt= 00:04:02.618 20:25:01 -- scripts/common.sh@395 -- # return 1 00:04:02.618 20:25:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:02.618 1+0 records in 00:04:02.618 1+0 records out 00:04:02.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408899 s, 256 MB/s 00:04:02.618 20:25:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.618 20:25:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:02.618 20:25:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:02.618 20:25:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:02.618 20:25:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:02.618 No valid GPT data, bailing 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # pt= 00:04:02.618 20:25:01 -- scripts/common.sh@395 -- # return 1 00:04:02.618 20:25:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:02.618 1+0 records in 00:04:02.618 1+0 records out 00:04:02.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046308 s, 226 MB/s 00:04:02.618 20:25:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.618 20:25:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:02.618 20:25:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:02.618 20:25:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:02.618 20:25:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:02.618 No valid GPT data, bailing 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # pt= 00:04:02.618 20:25:01 -- scripts/common.sh@395 -- # return 1 00:04:02.618 20:25:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:02.618 1+0 records in 00:04:02.618 1+0 records out 00:04:02.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045649 s, 230 MB/s 00:04:02.618 20:25:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.618 20:25:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:02.618 20:25:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:02.618 20:25:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:02.618 20:25:01 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:02.618 No valid GPT data, bailing 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:02.618 20:25:01 -- scripts/common.sh@394 -- # pt= 00:04:02.618 20:25:01 -- scripts/common.sh@395 -- # return 1 00:04:02.618 20:25:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:02.618 1+0 records in 00:04:02.618 1+0 records out 00:04:02.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450928 s, 233 MB/s 00:04:02.618 20:25:01 -- spdk/autotest.sh@105 -- # sync 00:04:02.618 20:25:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:02.618 20:25:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:02.618 20:25:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:03.185 20:25:03 -- spdk/autotest.sh@111 -- # uname -s 00:04:03.185 20:25:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:03.185 20:25:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:03.185 20:25:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.752 Hugepages 00:04:03.752 node hugesize free / total 00:04:03.752 node0 1048576kB 0 / 0 00:04:03.752 node0 2048kB 0 / 0 00:04:03.752 00:04:03.752 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.057 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:04.057 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.057 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:04.057 20:25:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:04.057 20:25:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:04.057 20:25:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:04.057 20:25:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.883 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.883 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.883 20:25:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.260 20:25:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.260 20:25:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.260 20:25:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.260 20:25:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.260 20:25:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.261 20:25:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.261 20:25:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.261 20:25:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.261 20:25:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.261 20:25:06 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:06.261 20:25:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:06.261 20:25:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:04:06.261 Waiting for block devices as requested 00:04:06.519 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.519 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.519 20:25:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.519 20:25:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:06.519 20:25:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:06.519 20:25:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.519 20:25:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.519 20:25:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.519 20:25:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.519 20:25:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.519 20:25:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.519 20:25:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.519 20:25:06 -- common/autotest_common.sh@1543 -- # continue 00:04:06.519 20:25:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.519 20:25:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.519 20:25:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:06.519 20:25:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:06.519 20:25:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.519 20:25:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.519 20:25:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.519 20:25:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.519 20:25:06 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:06.520 20:25:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.520 20:25:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.520 20:25:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.520 20:25:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.520 20:25:06 -- common/autotest_common.sh@1543 -- # continue 00:04:06.520 20:25:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:06.520 20:25:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.520 20:25:06 -- common/autotest_common.sh@10 -- # set +x 00:04:06.778 20:25:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:06.778 20:25:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.778 20:25:06 -- common/autotest_common.sh@10 -- # set +x 00:04:06.778 20:25:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.342 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.342 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.342 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.683 20:25:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:07.683 20:25:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.683 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:04:07.683 20:25:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:07.683 20:25:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:07.683 20:25:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.683 20:25:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:07.683 20:25:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:07.683 20:25:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:07.683 20:25:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:07.683 20:25:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:07.683 20:25:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:07.683 20:25:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:07.683 20:25:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.683 20:25:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:07.683 20:25:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:07.683 20:25:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:07.683 20:25:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:07.683 20:25:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.683 20:25:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:07.683 20:25:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.683 20:25:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.683 20:25:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.683 20:25:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:07.683 20:25:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.683 20:25:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.683 20:25:07 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:07.683 20:25:07 -- common/autotest_common.sh@1572 -- # return 0 
00:04:07.683 20:25:07 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:07.683 20:25:07 -- common/autotest_common.sh@1580 -- # return 0 00:04:07.683 20:25:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:07.683 20:25:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:07.683 20:25:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:07.683 20:25:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:07.683 20:25:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:07.683 20:25:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.683 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:04:07.683 20:25:07 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:07.683 20:25:07 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:07.683 20:25:07 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:07.683 20:25:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.683 20:25:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.683 20:25:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.683 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:04:07.683 ************************************ 00:04:07.683 START TEST env 00:04:07.683 ************************************ 00:04:07.683 20:25:07 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.683 * Looking for test storage... 00:04:07.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:07.683 20:25:07 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.683 20:25:07 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.683 20:25:07 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.683 20:25:08 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.683 20:25:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.683 20:25:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.683 20:25:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.683 20:25:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.683 20:25:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.683 20:25:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.683 20:25:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.683 20:25:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.683 20:25:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.683 20:25:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.683 20:25:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.683 20:25:08 env -- scripts/common.sh@344 -- # case "$op" in 00:04:07.683 20:25:08 env -- scripts/common.sh@345 -- # : 1 00:04:07.683 20:25:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.683 20:25:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.683 20:25:08 env -- scripts/common.sh@365 -- # decimal 1 00:04:07.683 20:25:08 env -- scripts/common.sh@353 -- # local d=1 00:04:07.684 20:25:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.684 20:25:08 env -- scripts/common.sh@355 -- # echo 1 00:04:07.684 20:25:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.684 20:25:08 env -- scripts/common.sh@366 -- # decimal 2 00:04:07.684 20:25:08 env -- scripts/common.sh@353 -- # local d=2 00:04:07.945 20:25:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.945 20:25:08 env -- scripts/common.sh@355 -- # echo 2 00:04:07.945 20:25:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.945 20:25:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.945 20:25:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.945 20:25:08 env -- scripts/common.sh@368 -- # return 0 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.945 --rc genhtml_branch_coverage=1 00:04:07.945 --rc genhtml_function_coverage=1 00:04:07.945 --rc genhtml_legend=1 00:04:07.945 --rc geninfo_all_blocks=1 00:04:07.945 --rc geninfo_unexecuted_blocks=1 00:04:07.945 00:04:07.945 ' 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.945 --rc genhtml_branch_coverage=1 00:04:07.945 --rc genhtml_function_coverage=1 00:04:07.945 --rc genhtml_legend=1 00:04:07.945 --rc geninfo_all_blocks=1 00:04:07.945 --rc geninfo_unexecuted_blocks=1 00:04:07.945 00:04:07.945 ' 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.945 --rc genhtml_branch_coverage=1 00:04:07.945 --rc genhtml_function_coverage=1 00:04:07.945 --rc genhtml_legend=1 00:04:07.945 --rc geninfo_all_blocks=1 00:04:07.945 --rc geninfo_unexecuted_blocks=1 00:04:07.945 00:04:07.945 ' 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.945 --rc genhtml_branch_coverage=1 00:04:07.945 --rc genhtml_function_coverage=1 00:04:07.945 --rc genhtml_legend=1 00:04:07.945 --rc geninfo_all_blocks=1 00:04:07.945 --rc geninfo_unexecuted_blocks=1 00:04:07.945 00:04:07.945 ' 00:04:07.945 20:25:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.945 20:25:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.945 20:25:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.945 ************************************ 00:04:07.945 START TEST env_memory 00:04:07.945 ************************************ 00:04:07.945 20:25:08 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.945 00:04:07.945 00:04:07.945 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.945 http://cunit.sourceforge.net/ 00:04:07.945 00:04:07.945 00:04:07.945 Suite: memory 00:04:07.945 Test: alloc and free memory map ...[2024-11-26 20:25:08.098524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.945 passed 00:04:07.945 Test: mem map translation ...[2024-11-26 20:25:08.130779] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.945 [2024-11-26 20:25:08.131198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.945 [2024-11-26 20:25:08.131572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.945 [2024-11-26 20:25:08.131920] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.945 passed 00:04:07.945 Test: mem map registration ...[2024-11-26 20:25:08.196540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:07.945 [2024-11-26 20:25:08.196914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:07.945 passed 00:04:07.945 Test: mem map adjacent registrations ...passed 00:04:07.945 00:04:07.945 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.945 suites 1 1 n/a 0 0 00:04:07.945 tests 4 4 4 0 0 00:04:07.945 asserts 152 152 152 0 n/a 00:04:07.945 00:04:07.945 Elapsed time = 0.218 seconds 00:04:07.945 00:04:07.945 real 0m0.238s 00:04:07.945 user 0m0.220s 00:04:07.945 sys 0m0.011s 00:04:07.945 20:25:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.945 ************************************ 00:04:07.945 END TEST env_memory 00:04:07.945 ************************************ 00:04:07.945 20:25:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.206 20:25:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.206 20:25:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.206 20:25:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.206 20:25:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.206 ************************************ 00:04:08.206 START TEST env_vtophys 00:04:08.206 ************************************ 00:04:08.206 20:25:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.206 EAL: lib.eal log level changed from notice to debug 00:04:08.206 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 1 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 2 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 3 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 4 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 5 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 6 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 7 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 8 as core 0 on socket 0 00:04:08.206 EAL: Detected lcore 9 as core 0 on socket 0 00:04:08.206 EAL: Maximum logical cores by configuration: 128 00:04:08.206 EAL: Detected CPU lcores: 10 00:04:08.206 EAL: Detected NUMA nodes: 1 00:04:08.206 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.206 EAL: Detected shared linkage of DPDK 00:04:08.206 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:08.206 EAL: Selected IOVA mode 'PA' 00:04:08.206 EAL: Probing VFIO support... 00:04:08.206 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.206 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:08.206 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.206 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.206 EAL: Setting up physically contiguous memory... 00:04:08.206 EAL: Setting maximum number of open files to 524288 00:04:08.206 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.206 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.206 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.206 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.206 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.206 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.206 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.206 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.206 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.206 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.206 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.206 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.206 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.206 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.206 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.206 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.206 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.206 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.206 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.206 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.206 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.206 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.206 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.206 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.206 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.206 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.206 EAL: Hugepages will be freed exactly as allocated. 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: TSC frequency is ~2200000 KHz 00:04:08.206 EAL: Main lcore 0 is ready (tid=7f231c25da00;cpuset=[0]) 00:04:08.206 EAL: Trying to obtain current memory policy. 00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.206 EAL: Restoring previous memory policy: 0 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.206 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.206 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.206 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.206 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:08.206 00:04:08.206 00:04:08.206 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.206 http://cunit.sourceforge.net/ 00:04:08.206 00:04:08.206 00:04:08.206 Suite: components_suite 00:04:08.206 Test: vtophys_malloc_test ...passed 00:04:08.206 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.206 EAL: Restoring previous memory policy: 4 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was expanded by 4MB 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was shrunk by 4MB 00:04:08.206 EAL: Trying to obtain current memory policy. 00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.206 EAL: Restoring previous memory policy: 4 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was expanded by 6MB 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was shrunk by 6MB 00:04:08.206 EAL: Trying to obtain current memory policy. 00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.206 EAL: Restoring previous memory policy: 4 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was expanded by 10MB 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was shrunk by 10MB 00:04:08.206 EAL: Trying to obtain current memory policy. 00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.206 EAL: Restoring previous memory policy: 4 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was expanded by 18MB 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was shrunk by 18MB 00:04:08.206 EAL: Trying to obtain current memory policy. 00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.206 EAL: Restoring previous memory policy: 4 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was expanded by 34MB 00:04:08.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.206 EAL: request: mp_malloc_sync 00:04:08.206 EAL: No shared files mode enabled, IPC is disabled 00:04:08.206 EAL: Heap on socket 0 was shrunk by 34MB 00:04:08.206 EAL: Trying to obtain current memory policy. 
00:04:08.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.465 EAL: Restoring previous memory policy: 4 00:04:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.465 EAL: request: mp_malloc_sync 00:04:08.466 EAL: No shared files mode enabled, IPC is disabled 00:04:08.466 EAL: Heap on socket 0 was expanded by 66MB 00:04:08.466 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.466 EAL: request: mp_malloc_sync 00:04:08.466 EAL: No shared files mode enabled, IPC is disabled 00:04:08.466 EAL: Heap on socket 0 was shrunk by 66MB 00:04:08.466 EAL: Trying to obtain current memory policy. 00:04:08.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.466 EAL: Restoring previous memory policy: 4 00:04:08.466 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.466 EAL: request: mp_malloc_sync 00:04:08.466 EAL: No shared files mode enabled, IPC is disabled 00:04:08.466 EAL: Heap on socket 0 was expanded by 130MB 00:04:08.466 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.466 EAL: request: mp_malloc_sync 00:04:08.466 EAL: No shared files mode enabled, IPC is disabled 00:04:08.466 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.466 EAL: Trying to obtain current memory policy. 00:04:08.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.466 EAL: Restoring previous memory policy: 4 00:04:08.466 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.466 EAL: request: mp_malloc_sync 00:04:08.466 EAL: No shared files mode enabled, IPC is disabled 00:04:08.466 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.466 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.725 EAL: request: mp_malloc_sync 00:04:08.725 EAL: No shared files mode enabled, IPC is disabled 00:04:08.725 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.725 EAL: Trying to obtain current memory policy. 00:04:08.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.725 EAL: Restoring previous memory policy: 4 00:04:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.725 EAL: request: mp_malloc_sync 00:04:08.725 EAL: No shared files mode enabled, IPC is disabled 00:04:08.725 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.014 EAL: request: mp_malloc_sync 00:04:09.014 EAL: No shared files mode enabled, IPC is disabled 00:04:09.014 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.014 EAL: Trying to obtain current memory policy. 
00:04:09.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.280 EAL: Restoring previous memory policy: 4 00:04:09.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.280 EAL: request: mp_malloc_sync 00:04:09.280 EAL: No shared files mode enabled, IPC is disabled 00:04:09.280 EAL: Heap on socket 0 was expanded by 1026MB 00:04:09.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.540 passed 00:04:09.540 00:04:09.540 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.540 suites 1 1 n/a 0 0 00:04:09.540 tests 2 2 2 0 0 00:04:09.540 asserts 5344 5344 5344 0 n/a 00:04:09.540 00:04:09.540 Elapsed time = 1.251 seconds 00:04:09.540 EAL: request: mp_malloc_sync 00:04:09.540 EAL: No shared files mode enabled, IPC is disabled 00:04:09.540 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.540 EAL: request: mp_malloc_sync 00:04:09.540 EAL: No shared files mode enabled, IPC is disabled 00:04:09.540 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.540 EAL: No shared files mode enabled, IPC is disabled 00:04:09.540 EAL: No shared files mode enabled, IPC is disabled 00:04:09.540 EAL: No shared files mode enabled, IPC is disabled 00:04:09.540 00:04:09.540 real 0m1.461s 00:04:09.540 user 0m0.801s 00:04:09.540 sys 0m0.523s 00:04:09.540 20:25:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.540 ************************************ 00:04:09.540 END TEST env_vtophys 00:04:09.540 ************************************ 00:04:09.540 20:25:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.540 20:25:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.540 20:25:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.540 20:25:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.540 20:25:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.540 ************************************ 00:04:09.540 START TEST env_pci 00:04:09.540 ************************************ 00:04:09.540 20:25:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.540 00:04:09.540 00:04:09.540 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.540 http://cunit.sourceforge.net/ 00:04:09.540 00:04:09.540 00:04:09.540 Suite: pci 00:04:09.540 Test: pci_hook ...[2024-11-26 20:25:09.863309] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56606 has claimed it 00:04:09.540 passed 00:04:09.540 00:04:09.540 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.540 suites 1 1 n/a 0 0 00:04:09.540 tests 1 1 1 0 0 00:04:09.540 asserts 25 25 25 0 n/a 00:04:09.540 00:04:09.540 Elapsed time = 0.002 seconds 00:04:09.540 EAL: Cannot find device (10000:00:01.0) 00:04:09.540 EAL: Failed to attach device on primary process 00:04:09.540 00:04:09.540 real 0m0.018s 00:04:09.540 user 0m0.012s 00:04:09.540 sys 0m0.006s 00:04:09.540 ************************************ 00:04:09.540 END TEST env_pci 00:04:09.540 ************************************ 00:04:09.540 20:25:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.540 20:25:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.797 20:25:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.797 20:25:09 env -- env/env.sh@15 -- # uname 00:04:09.797 20:25:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.797 20:25:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.797 20:25:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.797 20:25:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:09.797 20:25:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.797 20:25:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.797 ************************************ 00:04:09.797 START TEST env_dpdk_post_init 00:04:09.797 ************************************ 00:04:09.797 20:25:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.797 EAL: Detected CPU lcores: 10 00:04:09.797 EAL: Detected NUMA nodes: 1 00:04:09.797 EAL: Detected shared linkage of DPDK 00:04:09.797 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.797 EAL: Selected IOVA mode 'PA' 00:04:09.797 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.797 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:09.797 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:09.797 Starting DPDK initialization... 00:04:09.797 Starting SPDK post initialization... 00:04:09.797 SPDK NVMe probe 00:04:09.797 Attaching to 0000:00:10.0 00:04:09.797 Attaching to 0000:00:11.0 00:04:09.797 Attached to 0000:00:10.0 00:04:09.797 Attached to 0000:00:11.0 00:04:09.797 Cleaning up... 00:04:09.797 00:04:09.797 real 0m0.192s 00:04:09.797 user 0m0.053s 00:04:09.797 sys 0m0.039s 00:04:09.797 20:25:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.797 ************************************ 00:04:09.797 END TEST env_dpdk_post_init 00:04:09.797 ************************************ 00:04:09.797 20:25:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.055 20:25:10 env -- env/env.sh@26 -- # uname 00:04:10.055 20:25:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:10.055 20:25:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.055 20:25:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.055 20:25:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.055 20:25:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.055 ************************************ 00:04:10.055 START TEST env_mem_callbacks 00:04:10.055 ************************************ 00:04:10.055 20:25:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.055 EAL: Detected CPU lcores: 10 00:04:10.055 EAL: Detected NUMA nodes: 1 00:04:10.055 EAL: Detected shared linkage of DPDK 00:04:10.055 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.055 EAL: Selected IOVA mode 'PA' 00:04:10.055 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.055 00:04:10.055 00:04:10.055 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.055 http://cunit.sourceforge.net/ 00:04:10.055 00:04:10.055 00:04:10.055 Suite: memory 00:04:10.055 Test: test ... 
00:04:10.055 register 0x200000200000 2097152 00:04:10.055 malloc 3145728 00:04:10.055 register 0x200000400000 4194304 00:04:10.055 buf 0x200000500000 len 3145728 PASSED 00:04:10.055 malloc 64 00:04:10.055 buf 0x2000004fff40 len 64 PASSED 00:04:10.055 malloc 4194304 00:04:10.055 register 0x200000800000 6291456 00:04:10.055 buf 0x200000a00000 len 4194304 PASSED 00:04:10.055 free 0x200000500000 3145728 00:04:10.055 free 0x2000004fff40 64 00:04:10.055 unregister 0x200000400000 4194304 PASSED 00:04:10.055 free 0x200000a00000 4194304 00:04:10.055 unregister 0x200000800000 6291456 PASSED 00:04:10.055 malloc 8388608 00:04:10.055 register 0x200000400000 10485760 00:04:10.055 buf 0x200000600000 len 8388608 PASSED 00:04:10.055 free 0x200000600000 8388608 00:04:10.055 unregister 0x200000400000 10485760 PASSED 00:04:10.055 passed 00:04:10.055 00:04:10.055 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.055 suites 1 1 n/a 0 0 00:04:10.055 tests 1 1 1 0 0 00:04:10.055 asserts 15 15 15 0 n/a 00:04:10.055 00:04:10.055 Elapsed time = 0.009 seconds 00:04:10.055 ************************************ 00:04:10.055 END TEST env_mem_callbacks 00:04:10.055 ************************************ 00:04:10.055 00:04:10.055 real 0m0.142s 00:04:10.055 user 0m0.017s 00:04:10.055 sys 0m0.023s 00:04:10.055 20:25:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.055 20:25:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:10.055 ************************************ 00:04:10.055 END TEST env 00:04:10.055 ************************************ 00:04:10.055 00:04:10.055 real 0m2.497s 00:04:10.055 user 0m1.293s 00:04:10.055 sys 0m0.849s 00:04:10.055 20:25:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.055 20:25:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.055 20:25:10 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.055 20:25:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.055 20:25:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.055 20:25:10 -- common/autotest_common.sh@10 -- # set +x 00:04:10.055 ************************************ 00:04:10.055 START TEST rpc 00:04:10.055 ************************************ 00:04:10.055 20:25:10 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.313 * Looking for test storage... 
00:04:10.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.313 20:25:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:10.313 20:25:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:10.313 20:25:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.313 20:25:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.313 20:25:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.314 20:25:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.314 20:25:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.314 20:25:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.314 20:25:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.314 20:25:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.314 20:25:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.314 20:25:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.314 20:25:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.314 20:25:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.314 20:25:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.314 20:25:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.314 20:25:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.314 20:25:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.314 20:25:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.314 20:25:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:10.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.314 --rc genhtml_branch_coverage=1 00:04:10.314 --rc genhtml_function_coverage=1 00:04:10.314 --rc genhtml_legend=1 00:04:10.314 --rc geninfo_all_blocks=1 00:04:10.314 --rc geninfo_unexecuted_blocks=1 00:04:10.314 00:04:10.314 ' 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.314 --rc genhtml_branch_coverage=1 00:04:10.314 --rc genhtml_function_coverage=1 00:04:10.314 --rc genhtml_legend=1 00:04:10.314 --rc geninfo_all_blocks=1 00:04:10.314 --rc geninfo_unexecuted_blocks=1 00:04:10.314 00:04:10.314 ' 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.314 --rc genhtml_branch_coverage=1 00:04:10.314 --rc genhtml_function_coverage=1 00:04:10.314 --rc 
genhtml_legend=1 00:04:10.314 --rc geninfo_all_blocks=1 00:04:10.314 --rc geninfo_unexecuted_blocks=1 00:04:10.314 00:04:10.314 ' 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.314 --rc genhtml_branch_coverage=1 00:04:10.314 --rc genhtml_function_coverage=1 00:04:10.314 --rc genhtml_legend=1 00:04:10.314 --rc geninfo_all_blocks=1 00:04:10.314 --rc geninfo_unexecuted_blocks=1 00:04:10.314 00:04:10.314 ' 00:04:10.314 20:25:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56724 00:04:10.314 20:25:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:10.314 20:25:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.314 20:25:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56724 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 56724 ']' 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.314 20:25:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.314 [2024-11-26 20:25:10.641927] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:10.314 [2024-11-26 20:25:10.642203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56724 ] 00:04:10.572 [2024-11-26 20:25:10.790787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.572 [2024-11-26 20:25:10.864024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:10.572 [2024-11-26 20:25:10.864327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56724' to capture a snapshot of events at runtime. 00:04:10.572 [2024-11-26 20:25:10.864600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:10.572 [2024-11-26 20:25:10.864967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:10.572 [2024-11-26 20:25:10.865150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56724 for offline analysis/debug. 
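For context, the target in this run is started with the bdev tracepoint group enabled (-e bdev), which is why the notices above point at a shared-memory trace file under /dev/shm. A minimal sketch of reproducing that startup and grabbing a live trace snapshot by hand, assuming an SPDK build tree laid out like this job's (paths relative to the repo root) and hugepages already configured:

    # Start the target with bdev tracepoints enabled.
    build/bin/spdk_tgt -e bdev &
    TGT_PID=$!

    # Snapshot trace events from the running target (as the notice above suggests) ...
    build/bin/spdk_trace -s spdk_tgt -p "$TGT_PID"

    # ... or keep the shared-memory trace file for offline analysis.
    cp "/dev/shm/spdk_tgt_trace.pid${TGT_PID}" /tmp/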
00:04:10.572 [2024-11-26 20:25:10.866019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.830 [2024-11-26 20:25:10.946315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:10.830 20:25:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.830 20:25:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:10.830 20:25:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.830 20:25:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.830 20:25:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:10.830 20:25:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:10.830 20:25:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.830 20:25:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.830 20:25:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.830 ************************************ 00:04:10.830 START TEST rpc_integrity 00:04:10.830 ************************************ 00:04:10.830 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:10.830 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.830 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.830 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.830 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.830 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.830 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.088 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.088 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.088 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.089 { 00:04:11.089 "name": "Malloc0", 00:04:11.089 "aliases": [ 00:04:11.089 "0a579333-649f-402e-a33c-42993c10f265" 00:04:11.089 ], 00:04:11.089 "product_name": "Malloc disk", 00:04:11.089 "block_size": 512, 00:04:11.089 "num_blocks": 16384, 00:04:11.089 "uuid": "0a579333-649f-402e-a33c-42993c10f265", 00:04:11.089 "assigned_rate_limits": { 00:04:11.089 "rw_ios_per_sec": 0, 00:04:11.089 "rw_mbytes_per_sec": 0, 00:04:11.089 "r_mbytes_per_sec": 0, 00:04:11.089 "w_mbytes_per_sec": 0 00:04:11.089 }, 00:04:11.089 "claimed": false, 00:04:11.089 "zoned": false, 00:04:11.089 
"supported_io_types": { 00:04:11.089 "read": true, 00:04:11.089 "write": true, 00:04:11.089 "unmap": true, 00:04:11.089 "flush": true, 00:04:11.089 "reset": true, 00:04:11.089 "nvme_admin": false, 00:04:11.089 "nvme_io": false, 00:04:11.089 "nvme_io_md": false, 00:04:11.089 "write_zeroes": true, 00:04:11.089 "zcopy": true, 00:04:11.089 "get_zone_info": false, 00:04:11.089 "zone_management": false, 00:04:11.089 "zone_append": false, 00:04:11.089 "compare": false, 00:04:11.089 "compare_and_write": false, 00:04:11.089 "abort": true, 00:04:11.089 "seek_hole": false, 00:04:11.089 "seek_data": false, 00:04:11.089 "copy": true, 00:04:11.089 "nvme_iov_md": false 00:04:11.089 }, 00:04:11.089 "memory_domains": [ 00:04:11.089 { 00:04:11.089 "dma_device_id": "system", 00:04:11.089 "dma_device_type": 1 00:04:11.089 }, 00:04:11.089 { 00:04:11.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.089 "dma_device_type": 2 00:04:11.089 } 00:04:11.089 ], 00:04:11.089 "driver_specific": {} 00:04:11.089 } 00:04:11.089 ]' 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 [2024-11-26 20:25:11.317612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.089 [2024-11-26 20:25:11.317705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.089 [2024-11-26 20:25:11.317727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0f050 00:04:11.089 [2024-11-26 20:25:11.317738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.089 [2024-11-26 20:25:11.319559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.089 [2024-11-26 20:25:11.319599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.089 Passthru0 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.089 { 00:04:11.089 "name": "Malloc0", 00:04:11.089 "aliases": [ 00:04:11.089 "0a579333-649f-402e-a33c-42993c10f265" 00:04:11.089 ], 00:04:11.089 "product_name": "Malloc disk", 00:04:11.089 "block_size": 512, 00:04:11.089 "num_blocks": 16384, 00:04:11.089 "uuid": "0a579333-649f-402e-a33c-42993c10f265", 00:04:11.089 "assigned_rate_limits": { 00:04:11.089 "rw_ios_per_sec": 0, 00:04:11.089 "rw_mbytes_per_sec": 0, 00:04:11.089 "r_mbytes_per_sec": 0, 00:04:11.089 "w_mbytes_per_sec": 0 00:04:11.089 }, 00:04:11.089 "claimed": true, 00:04:11.089 "claim_type": "exclusive_write", 00:04:11.089 "zoned": false, 00:04:11.089 "supported_io_types": { 00:04:11.089 "read": true, 00:04:11.089 "write": true, 00:04:11.089 "unmap": true, 00:04:11.089 "flush": true, 00:04:11.089 "reset": true, 00:04:11.089 "nvme_admin": false, 
00:04:11.089 "nvme_io": false, 00:04:11.089 "nvme_io_md": false, 00:04:11.089 "write_zeroes": true, 00:04:11.089 "zcopy": true, 00:04:11.089 "get_zone_info": false, 00:04:11.089 "zone_management": false, 00:04:11.089 "zone_append": false, 00:04:11.089 "compare": false, 00:04:11.089 "compare_and_write": false, 00:04:11.089 "abort": true, 00:04:11.089 "seek_hole": false, 00:04:11.089 "seek_data": false, 00:04:11.089 "copy": true, 00:04:11.089 "nvme_iov_md": false 00:04:11.089 }, 00:04:11.089 "memory_domains": [ 00:04:11.089 { 00:04:11.089 "dma_device_id": "system", 00:04:11.089 "dma_device_type": 1 00:04:11.089 }, 00:04:11.089 { 00:04:11.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.089 "dma_device_type": 2 00:04:11.089 } 00:04:11.089 ], 00:04:11.089 "driver_specific": {} 00:04:11.089 }, 00:04:11.089 { 00:04:11.089 "name": "Passthru0", 00:04:11.089 "aliases": [ 00:04:11.089 "4392f431-7ec7-5152-b8e0-fe130f0f3ad5" 00:04:11.089 ], 00:04:11.089 "product_name": "passthru", 00:04:11.089 "block_size": 512, 00:04:11.089 "num_blocks": 16384, 00:04:11.089 "uuid": "4392f431-7ec7-5152-b8e0-fe130f0f3ad5", 00:04:11.089 "assigned_rate_limits": { 00:04:11.089 "rw_ios_per_sec": 0, 00:04:11.089 "rw_mbytes_per_sec": 0, 00:04:11.089 "r_mbytes_per_sec": 0, 00:04:11.089 "w_mbytes_per_sec": 0 00:04:11.089 }, 00:04:11.089 "claimed": false, 00:04:11.089 "zoned": false, 00:04:11.089 "supported_io_types": { 00:04:11.089 "read": true, 00:04:11.089 "write": true, 00:04:11.089 "unmap": true, 00:04:11.089 "flush": true, 00:04:11.089 "reset": true, 00:04:11.089 "nvme_admin": false, 00:04:11.089 "nvme_io": false, 00:04:11.089 "nvme_io_md": false, 00:04:11.089 "write_zeroes": true, 00:04:11.089 "zcopy": true, 00:04:11.089 "get_zone_info": false, 00:04:11.089 "zone_management": false, 00:04:11.089 "zone_append": false, 00:04:11.089 "compare": false, 00:04:11.089 "compare_and_write": false, 00:04:11.089 "abort": true, 00:04:11.089 "seek_hole": false, 00:04:11.089 "seek_data": false, 00:04:11.089 "copy": true, 00:04:11.089 "nvme_iov_md": false 00:04:11.089 }, 00:04:11.089 "memory_domains": [ 00:04:11.089 { 00:04:11.089 "dma_device_id": "system", 00:04:11.089 "dma_device_type": 1 00:04:11.089 }, 00:04:11.089 { 00:04:11.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.089 "dma_device_type": 2 00:04:11.089 } 00:04:11.089 ], 00:04:11.089 "driver_specific": { 00:04:11.089 "passthru": { 00:04:11.089 "name": "Passthru0", 00:04:11.089 "base_bdev_name": "Malloc0" 00:04:11.089 } 00:04:11.089 } 00:04:11.089 } 00:04:11.089 ]' 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.089 20:25:11 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.089 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.089 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.360 20:25:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.360 00:04:11.360 real 0m0.312s 00:04:11.360 user 0m0.208s 00:04:11.360 sys 0m0.033s 00:04:11.360 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.360 20:25:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.360 ************************************ 00:04:11.360 END TEST rpc_integrity 00:04:11.360 ************************************ 00:04:11.360 20:25:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.360 20:25:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.360 20:25:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.360 20:25:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.360 ************************************ 00:04:11.360 START TEST rpc_plugins 00:04:11.360 ************************************ 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.360 { 00:04:11.360 "name": "Malloc1", 00:04:11.360 "aliases": [ 00:04:11.360 "31db7f8a-3e94-4968-815a-63c297320a00" 00:04:11.360 ], 00:04:11.360 "product_name": "Malloc disk", 00:04:11.360 "block_size": 4096, 00:04:11.360 "num_blocks": 256, 00:04:11.360 "uuid": "31db7f8a-3e94-4968-815a-63c297320a00", 00:04:11.360 "assigned_rate_limits": { 00:04:11.360 "rw_ios_per_sec": 0, 00:04:11.360 "rw_mbytes_per_sec": 0, 00:04:11.360 "r_mbytes_per_sec": 0, 00:04:11.360 "w_mbytes_per_sec": 0 00:04:11.360 }, 00:04:11.360 "claimed": false, 00:04:11.360 "zoned": false, 00:04:11.360 "supported_io_types": { 00:04:11.360 "read": true, 00:04:11.360 "write": true, 00:04:11.360 "unmap": true, 00:04:11.360 "flush": true, 00:04:11.360 "reset": true, 00:04:11.360 "nvme_admin": false, 00:04:11.360 "nvme_io": false, 00:04:11.360 "nvme_io_md": false, 00:04:11.360 "write_zeroes": true, 00:04:11.360 "zcopy": true, 00:04:11.360 "get_zone_info": false, 00:04:11.360 "zone_management": false, 00:04:11.360 "zone_append": false, 00:04:11.360 "compare": false, 00:04:11.360 "compare_and_write": false, 00:04:11.360 "abort": true, 00:04:11.360 "seek_hole": false, 00:04:11.360 "seek_data": false, 00:04:11.360 "copy": true, 00:04:11.360 "nvme_iov_md": false 00:04:11.360 }, 00:04:11.360 "memory_domains": [ 00:04:11.360 { 
00:04:11.360 "dma_device_id": "system", 00:04:11.360 "dma_device_type": 1 00:04:11.360 }, 00:04:11.360 { 00:04:11.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.360 "dma_device_type": 2 00:04:11.360 } 00:04:11.360 ], 00:04:11.360 "driver_specific": {} 00:04:11.360 } 00:04:11.360 ]' 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:11.360 ************************************ 00:04:11.360 END TEST rpc_plugins 00:04:11.360 ************************************ 00:04:11.360 20:25:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.360 00:04:11.360 real 0m0.178s 00:04:11.360 user 0m0.115s 00:04:11.360 sys 0m0.026s 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.360 20:25:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 20:25:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.616 20:25:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.616 20:25:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.616 20:25:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 ************************************ 00:04:11.616 START TEST rpc_trace_cmd_test 00:04:11.616 ************************************ 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:11.616 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56724", 00:04:11.616 "tpoint_group_mask": "0x8", 00:04:11.616 "iscsi_conn": { 00:04:11.616 "mask": "0x2", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "scsi": { 00:04:11.616 "mask": "0x4", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "bdev": { 00:04:11.616 "mask": "0x8", 00:04:11.616 "tpoint_mask": "0xffffffffffffffff" 00:04:11.616 }, 00:04:11.616 "nvmf_rdma": { 00:04:11.616 "mask": "0x10", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "nvmf_tcp": { 00:04:11.616 "mask": "0x20", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "ftl": { 00:04:11.616 
"mask": "0x40", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "blobfs": { 00:04:11.616 "mask": "0x80", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "dsa": { 00:04:11.616 "mask": "0x200", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "thread": { 00:04:11.616 "mask": "0x400", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "nvme_pcie": { 00:04:11.616 "mask": "0x800", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "iaa": { 00:04:11.616 "mask": "0x1000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "nvme_tcp": { 00:04:11.616 "mask": "0x2000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "bdev_nvme": { 00:04:11.616 "mask": "0x4000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "sock": { 00:04:11.616 "mask": "0x8000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "blob": { 00:04:11.616 "mask": "0x10000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "bdev_raid": { 00:04:11.616 "mask": "0x20000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 }, 00:04:11.616 "scheduler": { 00:04:11.616 "mask": "0x40000", 00:04:11.616 "tpoint_mask": "0x0" 00:04:11.616 } 00:04:11.616 }' 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.616 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.873 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.873 20:25:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.873 ************************************ 00:04:11.873 END TEST rpc_trace_cmd_test 00:04:11.873 ************************************ 00:04:11.873 20:25:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.873 00:04:11.873 real 0m0.282s 00:04:11.873 user 0m0.243s 00:04:11.873 sys 0m0.025s 00:04:11.873 20:25:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.873 20:25:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.873 20:25:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:11.873 20:25:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:11.873 20:25:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:11.873 20:25:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.873 20:25:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.873 20:25:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.873 ************************************ 00:04:11.873 START TEST rpc_daemon_integrity 00:04:11.873 ************************************ 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.873 
20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.873 { 00:04:11.873 "name": "Malloc2", 00:04:11.873 "aliases": [ 00:04:11.873 "7bade215-4d33-4dec-8bcc-e0c2a45d8dff" 00:04:11.873 ], 00:04:11.873 "product_name": "Malloc disk", 00:04:11.873 "block_size": 512, 00:04:11.873 "num_blocks": 16384, 00:04:11.873 "uuid": "7bade215-4d33-4dec-8bcc-e0c2a45d8dff", 00:04:11.873 "assigned_rate_limits": { 00:04:11.873 "rw_ios_per_sec": 0, 00:04:11.873 "rw_mbytes_per_sec": 0, 00:04:11.873 "r_mbytes_per_sec": 0, 00:04:11.873 "w_mbytes_per_sec": 0 00:04:11.873 }, 00:04:11.873 "claimed": false, 00:04:11.873 "zoned": false, 00:04:11.873 "supported_io_types": { 00:04:11.873 "read": true, 00:04:11.873 "write": true, 00:04:11.873 "unmap": true, 00:04:11.873 "flush": true, 00:04:11.873 "reset": true, 00:04:11.873 "nvme_admin": false, 00:04:11.873 "nvme_io": false, 00:04:11.873 "nvme_io_md": false, 00:04:11.873 "write_zeroes": true, 00:04:11.873 "zcopy": true, 00:04:11.873 "get_zone_info": false, 00:04:11.873 "zone_management": false, 00:04:11.873 "zone_append": false, 00:04:11.873 "compare": false, 00:04:11.873 "compare_and_write": false, 00:04:11.873 "abort": true, 00:04:11.873 "seek_hole": false, 00:04:11.873 "seek_data": false, 00:04:11.873 "copy": true, 00:04:11.873 "nvme_iov_md": false 00:04:11.873 }, 00:04:11.873 "memory_domains": [ 00:04:11.873 { 00:04:11.873 "dma_device_id": "system", 00:04:11.873 "dma_device_type": 1 00:04:11.873 }, 00:04:11.873 { 00:04:11.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.873 "dma_device_type": 2 00:04:11.873 } 00:04:11.873 ], 00:04:11.873 "driver_specific": {} 00:04:11.873 } 00:04:11.873 ]' 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.873 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.873 [2024-11-26 20:25:12.222545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:11.873 [2024-11-26 20:25:12.222599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:11.873 [2024-11-26 20:25:12.222619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1a030 00:04:11.873 [2024-11-26 20:25:12.222629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.873 [2024-11-26 20:25:12.224210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.873 [2024-11-26 20:25:12.224263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.130 Passthru0 00:04:12.130 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.130 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.130 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.130 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.130 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.130 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.130 { 00:04:12.130 "name": "Malloc2", 00:04:12.130 "aliases": [ 00:04:12.130 "7bade215-4d33-4dec-8bcc-e0c2a45d8dff" 00:04:12.130 ], 00:04:12.130 "product_name": "Malloc disk", 00:04:12.130 "block_size": 512, 00:04:12.130 "num_blocks": 16384, 00:04:12.130 "uuid": "7bade215-4d33-4dec-8bcc-e0c2a45d8dff", 00:04:12.130 "assigned_rate_limits": { 00:04:12.130 "rw_ios_per_sec": 0, 00:04:12.130 "rw_mbytes_per_sec": 0, 00:04:12.130 "r_mbytes_per_sec": 0, 00:04:12.130 "w_mbytes_per_sec": 0 00:04:12.130 }, 00:04:12.130 "claimed": true, 00:04:12.130 "claim_type": "exclusive_write", 00:04:12.130 "zoned": false, 00:04:12.130 "supported_io_types": { 00:04:12.130 "read": true, 00:04:12.130 "write": true, 00:04:12.130 "unmap": true, 00:04:12.130 "flush": true, 00:04:12.130 "reset": true, 00:04:12.130 "nvme_admin": false, 00:04:12.130 "nvme_io": false, 00:04:12.130 "nvme_io_md": false, 00:04:12.130 "write_zeroes": true, 00:04:12.130 "zcopy": true, 00:04:12.130 "get_zone_info": false, 00:04:12.130 "zone_management": false, 00:04:12.130 "zone_append": false, 00:04:12.130 "compare": false, 00:04:12.130 "compare_and_write": false, 00:04:12.130 "abort": true, 00:04:12.130 "seek_hole": false, 00:04:12.130 "seek_data": false, 00:04:12.130 "copy": true, 00:04:12.130 "nvme_iov_md": false 00:04:12.130 }, 00:04:12.130 "memory_domains": [ 00:04:12.130 { 00:04:12.130 "dma_device_id": "system", 00:04:12.130 "dma_device_type": 1 00:04:12.130 }, 00:04:12.130 { 00:04:12.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.130 "dma_device_type": 2 00:04:12.130 } 00:04:12.130 ], 00:04:12.130 "driver_specific": {} 00:04:12.130 }, 00:04:12.130 { 00:04:12.130 "name": "Passthru0", 00:04:12.130 "aliases": [ 00:04:12.130 "cbb6d6d4-8f4a-579e-9ead-d295a7a48641" 00:04:12.130 ], 00:04:12.130 "product_name": "passthru", 00:04:12.131 "block_size": 512, 00:04:12.131 "num_blocks": 16384, 00:04:12.131 "uuid": "cbb6d6d4-8f4a-579e-9ead-d295a7a48641", 00:04:12.131 "assigned_rate_limits": { 00:04:12.131 "rw_ios_per_sec": 0, 00:04:12.131 "rw_mbytes_per_sec": 0, 00:04:12.131 "r_mbytes_per_sec": 0, 00:04:12.131 "w_mbytes_per_sec": 0 00:04:12.131 }, 00:04:12.131 "claimed": false, 00:04:12.131 "zoned": false, 00:04:12.131 "supported_io_types": { 00:04:12.131 "read": true, 00:04:12.131 "write": true, 00:04:12.131 "unmap": true, 00:04:12.131 "flush": true, 00:04:12.131 "reset": true, 00:04:12.131 "nvme_admin": false, 00:04:12.131 "nvme_io": false, 00:04:12.131 "nvme_io_md": 
false, 00:04:12.131 "write_zeroes": true, 00:04:12.131 "zcopy": true, 00:04:12.131 "get_zone_info": false, 00:04:12.131 "zone_management": false, 00:04:12.131 "zone_append": false, 00:04:12.131 "compare": false, 00:04:12.131 "compare_and_write": false, 00:04:12.131 "abort": true, 00:04:12.131 "seek_hole": false, 00:04:12.131 "seek_data": false, 00:04:12.131 "copy": true, 00:04:12.131 "nvme_iov_md": false 00:04:12.131 }, 00:04:12.131 "memory_domains": [ 00:04:12.131 { 00:04:12.131 "dma_device_id": "system", 00:04:12.131 "dma_device_type": 1 00:04:12.131 }, 00:04:12.131 { 00:04:12.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.131 "dma_device_type": 2 00:04:12.131 } 00:04:12.131 ], 00:04:12.131 "driver_specific": { 00:04:12.131 "passthru": { 00:04:12.131 "name": "Passthru0", 00:04:12.131 "base_bdev_name": "Malloc2" 00:04:12.131 } 00:04:12.131 } 00:04:12.131 } 00:04:12.131 ]' 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.131 00:04:12.131 real 0m0.309s 00:04:12.131 user 0m0.200s 00:04:12.131 sys 0m0.040s 00:04:12.131 ************************************ 00:04:12.131 END TEST rpc_daemon_integrity 00:04:12.131 ************************************ 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.131 20:25:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.131 20:25:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.131 20:25:12 rpc -- rpc/rpc.sh@84 -- # killprocess 56724 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 56724 ']' 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@958 -- # kill -0 56724 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56724 00:04:12.131 killing process with pid 56724 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@960 
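rpc_integrity and rpc_daemon_integrity both walk the same create/claim/delete cycle shown in the bdev dumps above. A condensed sketch of the equivalent manual calls, assuming a target on the default RPC socket:

    # 8 MiB malloc bdev with 512-byte blocks -> 16384 blocks, matching the dumps above.
    scripts/rpc.py bdev_malloc_create 8 512              # prints the new name, e.g. Malloc0
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0

    # The malloc bdev is now claimed (claim_type exclusive_write) and two bdevs are listed.
    scripts/rpc.py bdev_get_bdevs | jq length            # expected: 2

    # Tear down in reverse order; the list drains back to zero.
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length            # expected: 0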
-- # process_name=reactor_0 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56724' 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@973 -- # kill 56724 00:04:12.131 20:25:12 rpc -- common/autotest_common.sh@978 -- # wait 56724 00:04:12.698 ************************************ 00:04:12.698 END TEST rpc 00:04:12.698 ************************************ 00:04:12.698 00:04:12.698 real 0m2.436s 00:04:12.698 user 0m3.077s 00:04:12.698 sys 0m0.646s 00:04:12.698 20:25:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.698 20:25:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.698 20:25:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.698 20:25:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.698 20:25:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.698 20:25:12 -- common/autotest_common.sh@10 -- # set +x 00:04:12.698 ************************************ 00:04:12.698 START TEST skip_rpc 00:04:12.698 ************************************ 00:04:12.698 20:25:12 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.698 * Looking for test storage... 00:04:12.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.698 20:25:12 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.698 20:25:12 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.698 20:25:12 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.955 20:25:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.955 20:25:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.955 20:25:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.955 20:25:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.955 20:25:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.955 20:25:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.955 20:25:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.956 20:25:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.956 --rc genhtml_branch_coverage=1 00:04:12.956 --rc genhtml_function_coverage=1 00:04:12.956 --rc genhtml_legend=1 00:04:12.956 --rc geninfo_all_blocks=1 00:04:12.956 --rc geninfo_unexecuted_blocks=1 00:04:12.956 00:04:12.956 ' 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.956 --rc genhtml_branch_coverage=1 00:04:12.956 --rc genhtml_function_coverage=1 00:04:12.956 --rc genhtml_legend=1 00:04:12.956 --rc geninfo_all_blocks=1 00:04:12.956 --rc geninfo_unexecuted_blocks=1 00:04:12.956 00:04:12.956 ' 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.956 --rc genhtml_branch_coverage=1 00:04:12.956 --rc genhtml_function_coverage=1 00:04:12.956 --rc genhtml_legend=1 00:04:12.956 --rc geninfo_all_blocks=1 00:04:12.956 --rc geninfo_unexecuted_blocks=1 00:04:12.956 00:04:12.956 ' 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.956 --rc genhtml_branch_coverage=1 00:04:12.956 --rc genhtml_function_coverage=1 00:04:12.956 --rc genhtml_legend=1 00:04:12.956 --rc geninfo_all_blocks=1 00:04:12.956 --rc geninfo_unexecuted_blocks=1 00:04:12.956 00:04:12.956 ' 00:04:12.956 20:25:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.956 20:25:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.956 20:25:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.956 20:25:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.956 ************************************ 00:04:12.956 START TEST skip_rpc 00:04:12.956 ************************************ 00:04:12.956 20:25:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:12.956 20:25:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56922 00:04:12.956 20:25:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.956 20:25:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:12.956 20:25:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:12.956 [2024-11-26 20:25:13.145734] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:12.956 [2024-11-26 20:25:13.146037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56922 ] 00:04:12.956 [2024-11-26 20:25:13.298791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.214 [2024-11-26 20:25:13.370411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.214 [2024-11-26 20:25:13.448524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56922 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56922 ']' 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56922 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56922 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 56922' 00:04:18.514 killing process with pid 56922 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56922 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56922 00:04:18.514 00:04:18.514 real 0m5.430s 00:04:18.514 user 0m5.059s 00:04:18.514 sys 0m0.277s 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.514 20:25:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.514 ************************************ 00:04:18.514 END TEST skip_rpc 00:04:18.514 ************************************ 00:04:18.514 20:25:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.514 20:25:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.514 20:25:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.514 20:25:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.514 ************************************ 00:04:18.514 START TEST skip_rpc_with_json 00:04:18.514 ************************************ 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57003 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57003 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57003 ']' 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.514 20:25:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.514 [2024-11-26 20:25:18.627962] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
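The skip_rpc case above starts the target with --no-rpc-server, so the only assertion is that an RPC call fails: the harness wraps rpc_cmd spdk_get_version in NOT and then kills the target. A minimal sketch of that negative check, assuming the same binary and the default socket path:

    # No RPC listener is created, so spdk_get_version is expected to fail.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    TGT_PID=$!
    sleep 5

    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server is answering" >&2
    fi

    kill "$TGT_PID"; wait "$TGT_PID" || true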
00:04:18.514 [2024-11-26 20:25:18.628287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57003 ] 00:04:18.514 [2024-11-26 20:25:18.774585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.514 [2024-11-26 20:25:18.839386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.773 [2024-11-26 20:25:18.911248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:18.773 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.773 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:18.773 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:18.773 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.773 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.773 [2024-11-26 20:25:19.122209] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:19.030 request: 00:04:19.030 { 00:04:19.030 "trtype": "tcp", 00:04:19.030 "method": "nvmf_get_transports", 00:04:19.030 "req_id": 1 00:04:19.030 } 00:04:19.030 Got JSON-RPC error response 00:04:19.030 response: 00:04:19.030 { 00:04:19.030 "code": -19, 00:04:19.030 "message": "No such device" 00:04:19.030 } 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.030 [2024-11-26 20:25:19.134321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.030 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.030 { 00:04:19.030 "subsystems": [ 00:04:19.030 { 00:04:19.030 "subsystem": "fsdev", 00:04:19.030 "config": [ 00:04:19.030 { 00:04:19.030 "method": "fsdev_set_opts", 00:04:19.030 "params": { 00:04:19.030 "fsdev_io_pool_size": 65535, 00:04:19.030 "fsdev_io_cache_size": 256 00:04:19.030 } 00:04:19.030 } 00:04:19.030 ] 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "subsystem": "keyring", 00:04:19.030 "config": [] 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "subsystem": "iobuf", 00:04:19.030 "config": [ 00:04:19.030 { 00:04:19.030 "method": "iobuf_set_options", 00:04:19.030 "params": { 00:04:19.030 "small_pool_count": 8192, 00:04:19.030 "large_pool_count": 1024, 00:04:19.030 "small_bufsize": 8192, 00:04:19.030 "large_bufsize": 135168, 00:04:19.030 "enable_numa": false 00:04:19.030 } 
00:04:19.030 } 00:04:19.030 ] 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "subsystem": "sock", 00:04:19.030 "config": [ 00:04:19.030 { 00:04:19.030 "method": "sock_set_default_impl", 00:04:19.030 "params": { 00:04:19.030 "impl_name": "uring" 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "sock_impl_set_options", 00:04:19.030 "params": { 00:04:19.030 "impl_name": "ssl", 00:04:19.030 "recv_buf_size": 4096, 00:04:19.030 "send_buf_size": 4096, 00:04:19.030 "enable_recv_pipe": true, 00:04:19.030 "enable_quickack": false, 00:04:19.030 "enable_placement_id": 0, 00:04:19.030 "enable_zerocopy_send_server": true, 00:04:19.030 "enable_zerocopy_send_client": false, 00:04:19.030 "zerocopy_threshold": 0, 00:04:19.030 "tls_version": 0, 00:04:19.030 "enable_ktls": false 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "sock_impl_set_options", 00:04:19.030 "params": { 00:04:19.030 "impl_name": "posix", 00:04:19.030 "recv_buf_size": 2097152, 00:04:19.030 "send_buf_size": 2097152, 00:04:19.030 "enable_recv_pipe": true, 00:04:19.030 "enable_quickack": false, 00:04:19.030 "enable_placement_id": 0, 00:04:19.030 "enable_zerocopy_send_server": true, 00:04:19.030 "enable_zerocopy_send_client": false, 00:04:19.030 "zerocopy_threshold": 0, 00:04:19.030 "tls_version": 0, 00:04:19.030 "enable_ktls": false 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "sock_impl_set_options", 00:04:19.030 "params": { 00:04:19.030 "impl_name": "uring", 00:04:19.030 "recv_buf_size": 2097152, 00:04:19.030 "send_buf_size": 2097152, 00:04:19.030 "enable_recv_pipe": true, 00:04:19.030 "enable_quickack": false, 00:04:19.030 "enable_placement_id": 0, 00:04:19.030 "enable_zerocopy_send_server": false, 00:04:19.030 "enable_zerocopy_send_client": false, 00:04:19.030 "zerocopy_threshold": 0, 00:04:19.030 "tls_version": 0, 00:04:19.030 "enable_ktls": false 00:04:19.030 } 00:04:19.030 } 00:04:19.030 ] 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "subsystem": "vmd", 00:04:19.030 "config": [] 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "subsystem": "accel", 00:04:19.030 "config": [ 00:04:19.030 { 00:04:19.030 "method": "accel_set_options", 00:04:19.030 "params": { 00:04:19.030 "small_cache_size": 128, 00:04:19.030 "large_cache_size": 16, 00:04:19.030 "task_count": 2048, 00:04:19.030 "sequence_count": 2048, 00:04:19.030 "buf_count": 2048 00:04:19.030 } 00:04:19.030 } 00:04:19.030 ] 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "subsystem": "bdev", 00:04:19.030 "config": [ 00:04:19.030 { 00:04:19.030 "method": "bdev_set_options", 00:04:19.030 "params": { 00:04:19.030 "bdev_io_pool_size": 65535, 00:04:19.030 "bdev_io_cache_size": 256, 00:04:19.030 "bdev_auto_examine": true, 00:04:19.030 "iobuf_small_cache_size": 128, 00:04:19.030 "iobuf_large_cache_size": 16 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "bdev_raid_set_options", 00:04:19.030 "params": { 00:04:19.030 "process_window_size_kb": 1024, 00:04:19.030 "process_max_bandwidth_mb_sec": 0 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "bdev_iscsi_set_options", 00:04:19.030 "params": { 00:04:19.030 "timeout_sec": 30 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "bdev_nvme_set_options", 00:04:19.030 "params": { 00:04:19.030 "action_on_timeout": "none", 00:04:19.030 "timeout_us": 0, 00:04:19.030 "timeout_admin_us": 0, 00:04:19.030 "keep_alive_timeout_ms": 10000, 00:04:19.030 "arbitration_burst": 0, 00:04:19.030 "low_priority_weight": 0, 00:04:19.030 "medium_priority_weight": 
0, 00:04:19.030 "high_priority_weight": 0, 00:04:19.030 "nvme_adminq_poll_period_us": 10000, 00:04:19.030 "nvme_ioq_poll_period_us": 0, 00:04:19.030 "io_queue_requests": 0, 00:04:19.030 "delay_cmd_submit": true, 00:04:19.030 "transport_retry_count": 4, 00:04:19.030 "bdev_retry_count": 3, 00:04:19.030 "transport_ack_timeout": 0, 00:04:19.030 "ctrlr_loss_timeout_sec": 0, 00:04:19.030 "reconnect_delay_sec": 0, 00:04:19.030 "fast_io_fail_timeout_sec": 0, 00:04:19.030 "disable_auto_failback": false, 00:04:19.030 "generate_uuids": false, 00:04:19.030 "transport_tos": 0, 00:04:19.030 "nvme_error_stat": false, 00:04:19.030 "rdma_srq_size": 0, 00:04:19.030 "io_path_stat": false, 00:04:19.030 "allow_accel_sequence": false, 00:04:19.030 "rdma_max_cq_size": 0, 00:04:19.030 "rdma_cm_event_timeout_ms": 0, 00:04:19.030 "dhchap_digests": [ 00:04:19.030 "sha256", 00:04:19.030 "sha384", 00:04:19.030 "sha512" 00:04:19.030 ], 00:04:19.030 "dhchap_dhgroups": [ 00:04:19.030 "null", 00:04:19.030 "ffdhe2048", 00:04:19.030 "ffdhe3072", 00:04:19.030 "ffdhe4096", 00:04:19.030 "ffdhe6144", 00:04:19.030 "ffdhe8192" 00:04:19.030 ] 00:04:19.030 } 00:04:19.030 }, 00:04:19.030 { 00:04:19.030 "method": "bdev_nvme_set_hotplug", 00:04:19.030 "params": { 00:04:19.030 "period_us": 100000, 00:04:19.030 "enable": false 00:04:19.031 } 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "method": "bdev_wait_for_examine" 00:04:19.031 } 00:04:19.031 ] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "scsi", 00:04:19.031 "config": null 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "scheduler", 00:04:19.031 "config": [ 00:04:19.031 { 00:04:19.031 "method": "framework_set_scheduler", 00:04:19.031 "params": { 00:04:19.031 "name": "static" 00:04:19.031 } 00:04:19.031 } 00:04:19.031 ] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "vhost_scsi", 00:04:19.031 "config": [] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "vhost_blk", 00:04:19.031 "config": [] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "ublk", 00:04:19.031 "config": [] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "nbd", 00:04:19.031 "config": [] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "nvmf", 00:04:19.031 "config": [ 00:04:19.031 { 00:04:19.031 "method": "nvmf_set_config", 00:04:19.031 "params": { 00:04:19.031 "discovery_filter": "match_any", 00:04:19.031 "admin_cmd_passthru": { 00:04:19.031 "identify_ctrlr": false 00:04:19.031 }, 00:04:19.031 "dhchap_digests": [ 00:04:19.031 "sha256", 00:04:19.031 "sha384", 00:04:19.031 "sha512" 00:04:19.031 ], 00:04:19.031 "dhchap_dhgroups": [ 00:04:19.031 "null", 00:04:19.031 "ffdhe2048", 00:04:19.031 "ffdhe3072", 00:04:19.031 "ffdhe4096", 00:04:19.031 "ffdhe6144", 00:04:19.031 "ffdhe8192" 00:04:19.031 ] 00:04:19.031 } 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "method": "nvmf_set_max_subsystems", 00:04:19.031 "params": { 00:04:19.031 "max_subsystems": 1024 00:04:19.031 } 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "method": "nvmf_set_crdt", 00:04:19.031 "params": { 00:04:19.031 "crdt1": 0, 00:04:19.031 "crdt2": 0, 00:04:19.031 "crdt3": 0 00:04:19.031 } 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "method": "nvmf_create_transport", 00:04:19.031 "params": { 00:04:19.031 "trtype": "TCP", 00:04:19.031 "max_queue_depth": 128, 00:04:19.031 "max_io_qpairs_per_ctrlr": 127, 00:04:19.031 "in_capsule_data_size": 4096, 00:04:19.031 "max_io_size": 131072, 00:04:19.031 "io_unit_size": 131072, 00:04:19.031 "max_aq_depth": 128, 00:04:19.031 "num_shared_buffers": 511, 00:04:19.031 
"buf_cache_size": 4294967295, 00:04:19.031 "dif_insert_or_strip": false, 00:04:19.031 "zcopy": false, 00:04:19.031 "c2h_success": true, 00:04:19.031 "sock_priority": 0, 00:04:19.031 "abort_timeout_sec": 1, 00:04:19.031 "ack_timeout": 0, 00:04:19.031 "data_wr_pool_size": 0 00:04:19.031 } 00:04:19.031 } 00:04:19.031 ] 00:04:19.031 }, 00:04:19.031 { 00:04:19.031 "subsystem": "iscsi", 00:04:19.031 "config": [ 00:04:19.031 { 00:04:19.031 "method": "iscsi_set_options", 00:04:19.031 "params": { 00:04:19.031 "node_base": "iqn.2016-06.io.spdk", 00:04:19.031 "max_sessions": 128, 00:04:19.031 "max_connections_per_session": 2, 00:04:19.031 "max_queue_depth": 64, 00:04:19.031 "default_time2wait": 2, 00:04:19.031 "default_time2retain": 20, 00:04:19.031 "first_burst_length": 8192, 00:04:19.031 "immediate_data": true, 00:04:19.031 "allow_duplicated_isid": false, 00:04:19.031 "error_recovery_level": 0, 00:04:19.031 "nop_timeout": 60, 00:04:19.031 "nop_in_interval": 30, 00:04:19.031 "disable_chap": false, 00:04:19.031 "require_chap": false, 00:04:19.031 "mutual_chap": false, 00:04:19.031 "chap_group": 0, 00:04:19.031 "max_large_datain_per_connection": 64, 00:04:19.031 "max_r2t_per_connection": 4, 00:04:19.031 "pdu_pool_size": 36864, 00:04:19.031 "immediate_data_pool_size": 16384, 00:04:19.031 "data_out_pool_size": 2048 00:04:19.031 } 00:04:19.031 } 00:04:19.031 ] 00:04:19.031 } 00:04:19.031 ] 00:04:19.031 } 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57003 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57003 ']' 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57003 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57003 00:04:19.031 killing process with pid 57003 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57003' 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57003 00:04:19.031 20:25:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57003 00:04:19.595 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57029 00:04:19.595 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.595 20:25:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57029 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57029 ']' 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57029 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:24.860 20:25:24 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57029 00:04:24.860 killing process with pid 57029 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57029' 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57029 00:04:24.860 20:25:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57029 00:04:24.860 20:25:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.860 20:25:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.860 ************************************ 00:04:24.860 END TEST skip_rpc_with_json 00:04:24.860 ************************************ 00:04:24.860 00:04:24.860 real 0m6.595s 00:04:24.860 user 0m6.149s 00:04:24.860 sys 0m0.608s 00:04:24.860 20:25:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.860 20:25:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.860 20:25:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:24.860 20:25:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.860 20:25:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.860 20:25:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.119 ************************************ 00:04:25.119 START TEST skip_rpc_with_delay 00:04:25.119 ************************************ 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.119 20:25:25 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.119 [2024-11-26 20:25:25.278884] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:25.119 ************************************ 00:04:25.119 END TEST skip_rpc_with_delay 00:04:25.119 ************************************ 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.119 00:04:25.119 real 0m0.081s 00:04:25.119 user 0m0.053s 00:04:25.119 sys 0m0.027s 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.119 20:25:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.119 20:25:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.119 20:25:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.119 20:25:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.119 20:25:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.119 20:25:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.119 20:25:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.119 ************************************ 00:04:25.119 START TEST exit_on_failed_rpc_init 00:04:25.119 ************************************ 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57137 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57137 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57137 ']' 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.119 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.119 [2024-11-26 20:25:25.411083] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:04:25.119 [2024-11-26 20:25:25.411180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:04:25.377 [2024-11-26 20:25:25.553667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.377 [2024-11-26 20:25:25.615611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.377 [2024-11-26 20:25:25.689706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:25.638 20:25:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.638 [2024-11-26 20:25:25.976918] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:25.638 [2024-11-26 20:25:25.977059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57149 ] 00:04:25.897 [2024-11-26 20:25:26.134758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.897 [2024-11-26 20:25:26.212231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.898 [2024-11-26 20:25:26.212361] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
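(Editor's note on the failure traced above: pid 57137 already owns the default RPC socket /var/tmp/spdk.sock, so the second spdk_tgt instance cannot start its RPC service and the test expects it to exit non-zero. A rough, hypothetical manual reproduction using the same binary and default socket as this run, without the harness's NOT/waitforlisten helpers:)
# start a primary target that takes the default RPC socket /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
primary=$!
sleep 2    # crude stand-in for waitforlisten
# the second instance keeps the default /var/tmp/spdk.sock, finds it busy, and exits non-zero
if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
    echo 'second spdk_tgt failed to init its RPC server, as expected'
fi
kill -SIGINT "$primary"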
00:04:25.898 [2024-11-26 20:25:26.212380] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:25.898 [2024-11-26 20:25:26.212393] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57137 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57137 ']' 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57137 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57137 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.157 killing process with pid 57137 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57137' 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57137 00:04:26.157 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57137 00:04:26.416 00:04:26.416 real 0m1.391s 00:04:26.417 user 0m1.551s 00:04:26.417 sys 0m0.404s 00:04:26.417 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.417 20:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.417 ************************************ 00:04:26.417 END TEST exit_on_failed_rpc_init 00:04:26.417 ************************************ 00:04:26.678 20:25:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.678 00:04:26.678 real 0m13.896s 00:04:26.678 user 0m13.011s 00:04:26.678 sys 0m1.509s 00:04:26.678 20:25:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.678 20:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.678 ************************************ 00:04:26.678 END TEST skip_rpc 00:04:26.678 ************************************ 00:04:26.678 20:25:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.678 20:25:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.678 20:25:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.678 20:25:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.678 
************************************ 00:04:26.678 START TEST rpc_client 00:04:26.678 ************************************ 00:04:26.678 20:25:26 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.678 * Looking for test storage... 00:04:26.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:26.678 20:25:26 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.678 20:25:26 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.678 20:25:26 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.678 20:25:27 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.678 20:25:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:26.678 20:25:27 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.678 20:25:27 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.678 --rc genhtml_branch_coverage=1 00:04:26.678 --rc genhtml_function_coverage=1 00:04:26.678 --rc genhtml_legend=1 00:04:26.678 --rc geninfo_all_blocks=1 00:04:26.678 --rc geninfo_unexecuted_blocks=1 00:04:26.678 00:04:26.678 ' 00:04:26.678 20:25:27 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.678 --rc genhtml_branch_coverage=1 00:04:26.678 --rc genhtml_function_coverage=1 00:04:26.678 --rc genhtml_legend=1 00:04:26.678 --rc geninfo_all_blocks=1 00:04:26.678 --rc geninfo_unexecuted_blocks=1 00:04:26.678 00:04:26.678 ' 00:04:26.678 20:25:27 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.678 --rc genhtml_branch_coverage=1 00:04:26.678 --rc genhtml_function_coverage=1 00:04:26.678 --rc genhtml_legend=1 00:04:26.678 --rc geninfo_all_blocks=1 00:04:26.678 --rc geninfo_unexecuted_blocks=1 00:04:26.678 00:04:26.678 ' 00:04:26.678 20:25:27 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.678 --rc genhtml_branch_coverage=1 00:04:26.679 --rc genhtml_function_coverage=1 00:04:26.679 --rc genhtml_legend=1 00:04:26.679 --rc geninfo_all_blocks=1 00:04:26.679 --rc geninfo_unexecuted_blocks=1 00:04:26.679 00:04:26.679 ' 00:04:26.679 20:25:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:26.938 OK 00:04:26.938 20:25:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:26.938 00:04:26.938 real 0m0.207s 00:04:26.938 user 0m0.133s 00:04:26.938 sys 0m0.083s 00:04:26.938 20:25:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.938 20:25:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:26.938 ************************************ 00:04:26.938 END TEST rpc_client 00:04:26.938 ************************************ 00:04:26.938 20:25:27 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.938 20:25:27 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.938 20:25:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.938 20:25:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.938 ************************************ 00:04:26.938 START TEST json_config 00:04:26.938 ************************************ 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.938 20:25:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.938 20:25:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.938 20:25:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.938 20:25:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.938 20:25:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.938 20:25:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:26.938 20:25:27 json_config -- scripts/common.sh@345 -- # : 1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.938 20:25:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.938 20:25:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@353 -- # local d=1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.938 20:25:27 json_config -- scripts/common.sh@355 -- # echo 1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.938 20:25:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@353 -- # local d=2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.938 20:25:27 json_config -- scripts/common.sh@355 -- # echo 2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.938 20:25:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.938 20:25:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.938 20:25:27 json_config -- scripts/common.sh@368 -- # return 0 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.938 20:25:27 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.938 --rc genhtml_branch_coverage=1 00:04:26.939 --rc genhtml_function_coverage=1 00:04:26.939 --rc genhtml_legend=1 00:04:26.939 --rc geninfo_all_blocks=1 00:04:26.939 --rc geninfo_unexecuted_blocks=1 00:04:26.939 00:04:26.939 ' 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.939 --rc genhtml_branch_coverage=1 00:04:26.939 --rc genhtml_function_coverage=1 00:04:26.939 --rc genhtml_legend=1 00:04:26.939 --rc geninfo_all_blocks=1 00:04:26.939 --rc geninfo_unexecuted_blocks=1 00:04:26.939 00:04:26.939 ' 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.939 --rc genhtml_branch_coverage=1 00:04:26.939 --rc genhtml_function_coverage=1 00:04:26.939 --rc genhtml_legend=1 00:04:26.939 --rc geninfo_all_blocks=1 00:04:26.939 --rc geninfo_unexecuted_blocks=1 00:04:26.939 00:04:26.939 ' 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.939 --rc genhtml_branch_coverage=1 00:04:26.939 --rc genhtml_function_coverage=1 00:04:26.939 --rc genhtml_legend=1 00:04:26.939 --rc geninfo_all_blocks=1 00:04:26.939 --rc geninfo_unexecuted_blocks=1 00:04:26.939 00:04:26.939 ' 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.939 20:25:27 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.939 20:25:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.939 20:25:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.939 20:25:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.939 20:25:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.939 20:25:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.939 20:25:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.939 20:25:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.939 20:25:27 json_config -- paths/export.sh@5 -- # export PATH 00:04:26.939 20:25:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@51 -- # : 0 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:26.939 20:25:27 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.939 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.939 20:25:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.939 INFO: JSON configuration test init 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.939 20:25:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.939 20:25:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.198 20:25:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:27.199 20:25:27 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.199 20:25:27 json_config -- json_config/common.sh@10 -- # shift 
00:04:27.199 20:25:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.199 20:25:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.199 20:25:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.199 20:25:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.199 20:25:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.199 20:25:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57283 00:04:27.199 Waiting for target to run... 00:04:27.199 20:25:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.199 20:25:27 json_config -- json_config/common.sh@25 -- # waitforlisten 57283 /var/tmp/spdk_tgt.sock 00:04:27.199 20:25:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:27.199 20:25:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 57283 ']' 00:04:27.199 20:25:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.199 20:25:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.199 20:25:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.199 20:25:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.199 20:25:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.199 [2024-11-26 20:25:27.359046] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:27.199 [2024-11-26 20:25:27.359157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57283 ] 00:04:27.766 [2024-11-26 20:25:27.810679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.766 [2024-11-26 20:25:27.871258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.334 20:25:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.334 00:04:28.334 20:25:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:28.334 20:25:28 json_config -- json_config/common.sh@26 -- # echo '' 00:04:28.334 20:25:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:28.334 20:25:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:28.334 20:25:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.334 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.334 20:25:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:28.334 20:25:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:28.334 20:25:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.334 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.334 20:25:28 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:28.334 20:25:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:28.334 20:25:28 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:28.593 [2024-11-26 20:25:28.872761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:28.851 20:25:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.851 20:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:28.851 20:25:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:28.851 20:25:29 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@54 -- # sort 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:29.108 20:25:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.108 20:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:29.108 20:25:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.108 20:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.108 20:25:29 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:29.108 20:25:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.108 20:25:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.675 MallocForNvmf0 00:04:29.675 20:25:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.675 20:25:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.934 MallocForNvmf1 00:04:29.934 20:25:30 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.934 20:25:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.192 [2024-11-26 20:25:30.319165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.192 20:25:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.192 20:25:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.452 20:25:30 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.452 20:25:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.710 20:25:30 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.710 20:25:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.968 20:25:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.968 20:25:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.227 [2024-11-26 20:25:31.415863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.227 20:25:31 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.227 20:25:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.227 20:25:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 20:25:31 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.227 20:25:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.227 20:25:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.227 20:25:31 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:31.227 20:25:31 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.227 20:25:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.486 MallocBdevForConfigChangeCheck 00:04:31.486 20:25:31 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.486 20:25:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.486 20:25:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.745 20:25:31 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.745 20:25:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.004 INFO: shutting down applications... 00:04:32.004 20:25:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:32.004 20:25:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:32.004 20:25:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:32.004 20:25:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:32.004 20:25:32 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.262 Calling clear_iscsi_subsystem 00:04:32.262 Calling clear_nvmf_subsystem 00:04:32.262 Calling clear_nbd_subsystem 00:04:32.262 Calling clear_ublk_subsystem 00:04:32.262 Calling clear_vhost_blk_subsystem 00:04:32.262 Calling clear_vhost_scsi_subsystem 00:04:32.262 Calling clear_bdev_subsystem 00:04:32.262 20:25:32 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:32.262 20:25:32 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:32.262 20:25:32 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:32.262 20:25:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.262 20:25:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:32.262 20:25:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.829 20:25:33 json_config -- json_config/json_config.sh@352 -- # break 00:04:32.829 20:25:33 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:32.829 20:25:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:32.829 20:25:33 json_config -- json_config/common.sh@31 -- # local app=target 00:04:32.829 20:25:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:32.829 20:25:33 json_config -- json_config/common.sh@35 -- # [[ -n 57283 ]] 00:04:32.829 20:25:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57283 00:04:32.829 20:25:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:32.829 20:25:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.829 20:25:33 json_config -- json_config/common.sh@41 -- # kill -0 57283 00:04:32.829 20:25:33 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:33.399 20:25:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.399 20:25:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.399 SPDK target shutdown done 00:04:33.399 INFO: relaunching applications... 00:04:33.399 20:25:33 json_config -- json_config/common.sh@41 -- # kill -0 57283 00:04:33.399 20:25:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.399 20:25:33 json_config -- json_config/common.sh@43 -- # break 00:04:33.399 20:25:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.399 20:25:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.399 20:25:33 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:33.399 20:25:33 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.399 20:25:33 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.399 20:25:33 json_config -- json_config/common.sh@10 -- # shift 00:04:33.399 20:25:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.399 20:25:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.399 20:25:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.399 20:25:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.399 20:25:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.399 20:25:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57484 00:04:33.399 20:25:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.399 Waiting for target to run... 00:04:33.399 20:25:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.399 20:25:33 json_config -- json_config/common.sh@25 -- # waitforlisten 57484 /var/tmp/spdk_tgt.sock 00:04:33.399 20:25:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 57484 ']' 00:04:33.399 20:25:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.399 20:25:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.399 20:25:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.399 20:25:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.399 20:25:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.399 [2024-11-26 20:25:33.604561] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
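(Editor's note: the relaunch traced above restarts the target directly from the configuration captured with save_config earlier in the run. Condensed to the two underlying commands, a sketch using the same paths and flags this job uses, outside the common.sh wrappers:)
# snapshot the running target's configuration over its RPC socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
# stop the old target, then bring up a fresh one straight from that JSON
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json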
00:04:33.399 [2024-11-26 20:25:33.604670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57484 ] 00:04:33.967 [2024-11-26 20:25:34.026721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.967 [2024-11-26 20:25:34.095028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.967 [2024-11-26 20:25:34.233915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.226 [2024-11-26 20:25:34.452474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.226 [2024-11-26 20:25:34.484561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.484 00:04:34.484 INFO: Checking if target configuration is the same... 00:04:34.484 20:25:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.484 20:25:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:34.484 20:25:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.484 20:25:34 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.484 20:25:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.484 20:25:34 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.484 20:25:34 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.484 20:25:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.484 + '[' 2 -ne 2 ']' 00:04:34.484 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.484 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:34.484 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.484 +++ basename /dev/fd/62 00:04:34.484 ++ mktemp /tmp/62.XXX 00:04:34.484 + tmp_file_1=/tmp/62.uvj 00:04:34.484 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.484 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.484 + tmp_file_2=/tmp/spdk_tgt_config.json.0Bu 00:04:34.484 + ret=0 00:04:34.484 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.742 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.742 + diff -u /tmp/62.uvj /tmp/spdk_tgt_config.json.0Bu 00:04:34.742 INFO: JSON config files are the same 00:04:34.742 + echo 'INFO: JSON config files are the same' 00:04:34.742 + rm /tmp/62.uvj /tmp/spdk_tgt_config.json.0Bu 00:04:35.024 + exit 0 00:04:35.024 20:25:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:35.024 INFO: changing configuration and checking if this can be detected... 00:04:35.024 20:25:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:35.024 20:25:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.024 20:25:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.024 20:25:35 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.024 20:25:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:35.024 20:25:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.024 + '[' 2 -ne 2 ']' 00:04:35.024 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:35.024 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:35.024 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:35.283 +++ basename /dev/fd/62 00:04:35.283 ++ mktemp /tmp/62.XXX 00:04:35.283 + tmp_file_1=/tmp/62.d6E 00:04:35.283 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.283 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.283 + tmp_file_2=/tmp/spdk_tgt_config.json.7Nw 00:04:35.283 + ret=0 00:04:35.283 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.541 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.541 + diff -u /tmp/62.d6E /tmp/spdk_tgt_config.json.7Nw 00:04:35.541 + ret=1 00:04:35.541 + echo '=== Start of file: /tmp/62.d6E ===' 00:04:35.541 + cat /tmp/62.d6E 00:04:35.541 + echo '=== End of file: /tmp/62.d6E ===' 00:04:35.541 + echo '' 00:04:35.541 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7Nw ===' 00:04:35.541 + cat /tmp/spdk_tgt_config.json.7Nw 00:04:35.541 + echo '=== End of file: /tmp/spdk_tgt_config.json.7Nw ===' 00:04:35.542 + echo '' 00:04:35.542 + rm /tmp/62.d6E /tmp/spdk_tgt_config.json.7Nw 00:04:35.542 + exit 1 00:04:35.542 INFO: configuration change detected. 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
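The change-detection pass differs only in that a marker bdev is removed over RPC before the second snapshot, so the sorted diff is now expected to fail. Roughly (bdev name and paths taken from the trace; json_diff.sh exits non-zero when the documents differ):

    # Remove the marker bdev; the live config no longer matches the JSON file.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! test/json_config/json_diff.sh \
            <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
            spdk_tgt_config.json; then
        echo 'INFO: configuration change detected.'
    fi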
00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:35.542 20:25:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.542 20:25:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@324 -- # [[ -n 57484 ]] 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.542 20:25:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.542 20:25:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:35.542 20:25:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.542 20:25:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.542 20:25:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.800 20:25:35 json_config -- json_config/json_config.sh@330 -- # killprocess 57484 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@954 -- # '[' -z 57484 ']' 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@958 -- # kill -0 57484 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@959 -- # uname 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57484 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.800 killing process with pid 57484 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57484' 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@973 -- # kill 57484 00:04:35.800 20:25:35 json_config -- common/autotest_common.sh@978 -- # wait 57484 00:04:36.059 20:25:36 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.059 20:25:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:36.059 20:25:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.059 20:25:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.059 20:25:36 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:36.059 INFO: Success 00:04:36.059 20:25:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:36.059 ************************************ 00:04:36.059 END TEST json_config 00:04:36.059 
************************************ 00:04:36.059 00:04:36.059 real 0m9.149s 00:04:36.059 user 0m13.337s 00:04:36.059 sys 0m1.799s 00:04:36.059 20:25:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.059 20:25:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.059 20:25:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.059 20:25:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.059 20:25:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.059 20:25:36 -- common/autotest_common.sh@10 -- # set +x 00:04:36.059 ************************************ 00:04:36.059 START TEST json_config_extra_key 00:04:36.059 ************************************ 00:04:36.059 20:25:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.059 20:25:36 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.059 20:25:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.059 20:25:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.318 20:25:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.318 20:25:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.318 20:25:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.318 20:25:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.318 --rc genhtml_branch_coverage=1 00:04:36.318 --rc genhtml_function_coverage=1 00:04:36.318 --rc genhtml_legend=1 00:04:36.318 --rc geninfo_all_blocks=1 00:04:36.318 --rc geninfo_unexecuted_blocks=1 00:04:36.318 00:04:36.318 ' 00:04:36.318 20:25:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.318 --rc genhtml_branch_coverage=1 00:04:36.318 --rc genhtml_function_coverage=1 00:04:36.318 --rc genhtml_legend=1 00:04:36.318 --rc geninfo_all_blocks=1 00:04:36.318 --rc geninfo_unexecuted_blocks=1 00:04:36.318 00:04:36.318 ' 00:04:36.318 20:25:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.318 --rc genhtml_branch_coverage=1 00:04:36.318 --rc genhtml_function_coverage=1 00:04:36.318 --rc genhtml_legend=1 00:04:36.318 --rc geninfo_all_blocks=1 00:04:36.318 --rc geninfo_unexecuted_blocks=1 00:04:36.318 00:04:36.318 ' 00:04:36.318 20:25:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.319 --rc genhtml_branch_coverage=1 00:04:36.319 --rc genhtml_function_coverage=1 00:04:36.319 --rc genhtml_legend=1 00:04:36.319 --rc geninfo_all_blocks=1 00:04:36.319 --rc geninfo_unexecuted_blocks=1 00:04:36.319 00:04:36.319 ' 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.319 20:25:36 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.319 20:25:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.319 20:25:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.319 20:25:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.319 20:25:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.319 20:25:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.319 20:25:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.319 20:25:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.319 20:25:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.319 20:25:36 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.319 20:25:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:36.319 INFO: launching applications... 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
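The declare -A lines traced above are the per-application bookkeeping that json_config/common.sh keeps for every app it may launch: pid, RPC socket, spdk_tgt arguments, and the JSON config to start from. Trimmed to the single "target" app used here:

    declare -A app_pid=([target]='')                           # filled in after launch
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')  # RPC socket per app
    declare -A app_params=([target]='-m 0x1 -s 1024')          # spdk_tgt arguments
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")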
00:04:36.319 20:25:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57638 00:04:36.319 Waiting for target to run... 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57638 /var/tmp/spdk_tgt.sock 00:04:36.319 20:25:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.319 20:25:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57638 ']' 00:04:36.319 20:25:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.319 20:25:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.319 20:25:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.319 20:25:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.319 20:25:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.319 [2024-11-26 20:25:36.574383] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:36.319 [2024-11-26 20:25:36.574541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57638 ] 00:04:36.885 [2024-11-26 20:25:37.016571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.885 [2024-11-26 20:25:37.082701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.885 [2024-11-26 20:25:37.122220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.453 20:25:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.453 00:04:37.453 20:25:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.453 INFO: shutting down applications... 00:04:37.453 20:25:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
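The shutdown that follows (and the poll loop at the top of this section, for pid 57283) is the bounded wait from json_config/common.sh: send SIGINT, then probe with kill -0 for up to 30 half-second intervals before declaring the target gone. In sketch form:

    kill -SIGINT "${app_pid[$app]}"            # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "${app_pid[$app]}" 2>/dev/null || break    # process is gone
        sleep 0.5
    done
    app_pid[$app]=                             # forget the pid
    echo 'SPDK target shutdown done'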
00:04:37.453 20:25:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57638 ]] 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57638 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57638 00:04:37.453 20:25:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57638 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.712 SPDK target shutdown done 00:04:37.712 20:25:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.712 Success 00:04:37.712 20:25:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.712 00:04:37.712 real 0m1.719s 00:04:37.712 user 0m1.578s 00:04:37.712 sys 0m0.473s 00:04:37.712 20:25:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.712 ************************************ 00:04:37.712 END TEST json_config_extra_key 00:04:37.712 ************************************ 00:04:37.712 20:25:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.712 20:25:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.712 20:25:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.712 20:25:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.712 20:25:38 -- common/autotest_common.sh@10 -- # set +x 00:04:37.712 ************************************ 00:04:37.712 START TEST alias_rpc 00:04:37.712 ************************************ 00:04:37.712 20:25:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.971 * Looking for test storage... 
00:04:37.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:37.971 20:25:38 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.971 20:25:38 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.971 20:25:38 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.972 20:25:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.972 --rc genhtml_branch_coverage=1 00:04:37.972 --rc genhtml_function_coverage=1 00:04:37.972 --rc genhtml_legend=1 00:04:37.972 --rc geninfo_all_blocks=1 00:04:37.972 --rc geninfo_unexecuted_blocks=1 00:04:37.972 00:04:37.972 ' 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.972 --rc genhtml_branch_coverage=1 00:04:37.972 --rc genhtml_function_coverage=1 00:04:37.972 --rc genhtml_legend=1 00:04:37.972 --rc geninfo_all_blocks=1 00:04:37.972 --rc geninfo_unexecuted_blocks=1 00:04:37.972 00:04:37.972 ' 00:04:37.972 20:25:38 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.972 --rc genhtml_branch_coverage=1 00:04:37.972 --rc genhtml_function_coverage=1 00:04:37.972 --rc genhtml_legend=1 00:04:37.972 --rc geninfo_all_blocks=1 00:04:37.972 --rc geninfo_unexecuted_blocks=1 00:04:37.972 00:04:37.972 ' 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.972 --rc genhtml_branch_coverage=1 00:04:37.972 --rc genhtml_function_coverage=1 00:04:37.972 --rc genhtml_legend=1 00:04:37.972 --rc geninfo_all_blocks=1 00:04:37.972 --rc geninfo_unexecuted_blocks=1 00:04:37.972 00:04:37.972 ' 00:04:37.972 20:25:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.972 20:25:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57716 00:04:37.972 20:25:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.972 20:25:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57716 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57716 ']' 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.972 20:25:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.972 [2024-11-26 20:25:38.304341] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:04:37.972 [2024-11-26 20:25:38.304503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57716 ] 00:04:38.231 [2024-11-26 20:25:38.458612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.231 [2024-11-26 20:25:38.521479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.489 [2024-11-26 20:25:38.601103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:39.057 20:25:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.057 20:25:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.057 20:25:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:39.315 20:25:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57716 00:04:39.315 20:25:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57716 ']' 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57716 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57716 00:04:39.316 killing process with pid 57716 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57716' 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 57716 00:04:39.316 20:25:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 57716 00:04:39.887 ************************************ 00:04:39.887 END TEST alias_rpc 00:04:39.887 ************************************ 00:04:39.887 00:04:39.887 real 0m1.959s 00:04:39.887 user 0m2.277s 00:04:39.887 sys 0m0.458s 00:04:39.887 20:25:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.887 20:25:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.887 20:25:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.887 20:25:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.887 20:25:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.887 20:25:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.887 20:25:40 -- common/autotest_common.sh@10 -- # set +x 00:04:39.887 ************************************ 00:04:39.887 START TEST spdkcli_tcp 00:04:39.887 ************************************ 00:04:39.887 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.888 * Looking for test storage... 
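The killprocess calls seen for pids 57484, 57716 and later 57800 follow one pattern from autotest_common.sh: confirm the pid is still alive, check its command name, then kill and reap it. A condensed sketch of that sequence (the sudo case is handled differently in the real helper and is simply skipped here):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1             # nothing left to kill
        # The trace checks the command name (reactor_0 vs sudo) before killing.
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }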
00:04:39.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.888 20:25:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.888 --rc genhtml_branch_coverage=1 00:04:39.888 --rc genhtml_function_coverage=1 00:04:39.888 --rc genhtml_legend=1 00:04:39.888 --rc geninfo_all_blocks=1 00:04:39.888 --rc geninfo_unexecuted_blocks=1 00:04:39.888 00:04:39.888 ' 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.888 --rc genhtml_branch_coverage=1 00:04:39.888 --rc genhtml_function_coverage=1 00:04:39.888 --rc genhtml_legend=1 00:04:39.888 --rc geninfo_all_blocks=1 00:04:39.888 --rc geninfo_unexecuted_blocks=1 00:04:39.888 
00:04:39.888 ' 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.888 --rc genhtml_branch_coverage=1 00:04:39.888 --rc genhtml_function_coverage=1 00:04:39.888 --rc genhtml_legend=1 00:04:39.888 --rc geninfo_all_blocks=1 00:04:39.888 --rc geninfo_unexecuted_blocks=1 00:04:39.888 00:04:39.888 ' 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.888 --rc genhtml_branch_coverage=1 00:04:39.888 --rc genhtml_function_coverage=1 00:04:39.888 --rc genhtml_legend=1 00:04:39.888 --rc geninfo_all_blocks=1 00:04:39.888 --rc geninfo_unexecuted_blocks=1 00:04:39.888 00:04:39.888 ' 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57800 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57800 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57800 ']' 00:04:39.888 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.888 20:25:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.148 [2024-11-26 20:25:40.287264] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
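What the spdkcli_tcp run below exercises is an RPC-over-TCP bridge: socat forwards a local TCP port to the target's UNIX-domain RPC socket, and rpc.py is pointed at the TCP side, which is where the long rpc_get_methods listing comes from. A minimal sketch using the address, port and flags from the trace:

    # Bridge 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query the target through the TCP side (same retry/timeout flags as the test).
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"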
00:04:40.148 [2024-11-26 20:25:40.287351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57800 ] 00:04:40.148 [2024-11-26 20:25:40.429244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.148 [2024-11-26 20:25:40.491160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.148 [2024-11-26 20:25:40.491167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.407 [2024-11-26 20:25:40.565791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.407 20:25:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.407 20:25:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:40.407 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57804 00:04:40.407 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.407 20:25:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.667 [ 00:04:40.667 "bdev_malloc_delete", 00:04:40.667 "bdev_malloc_create", 00:04:40.667 "bdev_null_resize", 00:04:40.667 "bdev_null_delete", 00:04:40.667 "bdev_null_create", 00:04:40.667 "bdev_nvme_cuse_unregister", 00:04:40.667 "bdev_nvme_cuse_register", 00:04:40.667 "bdev_opal_new_user", 00:04:40.667 "bdev_opal_set_lock_state", 00:04:40.667 "bdev_opal_delete", 00:04:40.667 "bdev_opal_get_info", 00:04:40.667 "bdev_opal_create", 00:04:40.667 "bdev_nvme_opal_revert", 00:04:40.667 "bdev_nvme_opal_init", 00:04:40.667 "bdev_nvme_send_cmd", 00:04:40.667 "bdev_nvme_set_keys", 00:04:40.667 "bdev_nvme_get_path_iostat", 00:04:40.667 "bdev_nvme_get_mdns_discovery_info", 00:04:40.667 "bdev_nvme_stop_mdns_discovery", 00:04:40.667 "bdev_nvme_start_mdns_discovery", 00:04:40.667 "bdev_nvme_set_multipath_policy", 00:04:40.667 "bdev_nvme_set_preferred_path", 00:04:40.667 "bdev_nvme_get_io_paths", 00:04:40.667 "bdev_nvme_remove_error_injection", 00:04:40.667 "bdev_nvme_add_error_injection", 00:04:40.667 "bdev_nvme_get_discovery_info", 00:04:40.667 "bdev_nvme_stop_discovery", 00:04:40.667 "bdev_nvme_start_discovery", 00:04:40.667 "bdev_nvme_get_controller_health_info", 00:04:40.667 "bdev_nvme_disable_controller", 00:04:40.667 "bdev_nvme_enable_controller", 00:04:40.667 "bdev_nvme_reset_controller", 00:04:40.667 "bdev_nvme_get_transport_statistics", 00:04:40.667 "bdev_nvme_apply_firmware", 00:04:40.667 "bdev_nvme_detach_controller", 00:04:40.667 "bdev_nvme_get_controllers", 00:04:40.667 "bdev_nvme_attach_controller", 00:04:40.667 "bdev_nvme_set_hotplug", 00:04:40.667 "bdev_nvme_set_options", 00:04:40.667 "bdev_passthru_delete", 00:04:40.667 "bdev_passthru_create", 00:04:40.667 "bdev_lvol_set_parent_bdev", 00:04:40.667 "bdev_lvol_set_parent", 00:04:40.667 "bdev_lvol_check_shallow_copy", 00:04:40.667 "bdev_lvol_start_shallow_copy", 00:04:40.667 "bdev_lvol_grow_lvstore", 00:04:40.667 "bdev_lvol_get_lvols", 00:04:40.667 "bdev_lvol_get_lvstores", 00:04:40.667 "bdev_lvol_delete", 00:04:40.667 "bdev_lvol_set_read_only", 00:04:40.667 "bdev_lvol_resize", 00:04:40.667 "bdev_lvol_decouple_parent", 00:04:40.667 "bdev_lvol_inflate", 00:04:40.667 "bdev_lvol_rename", 00:04:40.667 "bdev_lvol_clone_bdev", 00:04:40.667 "bdev_lvol_clone", 00:04:40.667 "bdev_lvol_snapshot", 
00:04:40.667 "bdev_lvol_create", 00:04:40.667 "bdev_lvol_delete_lvstore", 00:04:40.667 "bdev_lvol_rename_lvstore", 00:04:40.667 "bdev_lvol_create_lvstore", 00:04:40.667 "bdev_raid_set_options", 00:04:40.667 "bdev_raid_remove_base_bdev", 00:04:40.667 "bdev_raid_add_base_bdev", 00:04:40.667 "bdev_raid_delete", 00:04:40.667 "bdev_raid_create", 00:04:40.667 "bdev_raid_get_bdevs", 00:04:40.667 "bdev_error_inject_error", 00:04:40.667 "bdev_error_delete", 00:04:40.667 "bdev_error_create", 00:04:40.667 "bdev_split_delete", 00:04:40.667 "bdev_split_create", 00:04:40.667 "bdev_delay_delete", 00:04:40.667 "bdev_delay_create", 00:04:40.667 "bdev_delay_update_latency", 00:04:40.667 "bdev_zone_block_delete", 00:04:40.667 "bdev_zone_block_create", 00:04:40.667 "blobfs_create", 00:04:40.667 "blobfs_detect", 00:04:40.667 "blobfs_set_cache_size", 00:04:40.667 "bdev_aio_delete", 00:04:40.667 "bdev_aio_rescan", 00:04:40.667 "bdev_aio_create", 00:04:40.667 "bdev_ftl_set_property", 00:04:40.667 "bdev_ftl_get_properties", 00:04:40.667 "bdev_ftl_get_stats", 00:04:40.668 "bdev_ftl_unmap", 00:04:40.668 "bdev_ftl_unload", 00:04:40.668 "bdev_ftl_delete", 00:04:40.668 "bdev_ftl_load", 00:04:40.668 "bdev_ftl_create", 00:04:40.668 "bdev_virtio_attach_controller", 00:04:40.668 "bdev_virtio_scsi_get_devices", 00:04:40.668 "bdev_virtio_detach_controller", 00:04:40.668 "bdev_virtio_blk_set_hotplug", 00:04:40.668 "bdev_iscsi_delete", 00:04:40.668 "bdev_iscsi_create", 00:04:40.668 "bdev_iscsi_set_options", 00:04:40.668 "bdev_uring_delete", 00:04:40.668 "bdev_uring_rescan", 00:04:40.668 "bdev_uring_create", 00:04:40.668 "accel_error_inject_error", 00:04:40.668 "ioat_scan_accel_module", 00:04:40.668 "dsa_scan_accel_module", 00:04:40.668 "iaa_scan_accel_module", 00:04:40.668 "keyring_file_remove_key", 00:04:40.668 "keyring_file_add_key", 00:04:40.668 "keyring_linux_set_options", 00:04:40.668 "fsdev_aio_delete", 00:04:40.668 "fsdev_aio_create", 00:04:40.668 "iscsi_get_histogram", 00:04:40.668 "iscsi_enable_histogram", 00:04:40.668 "iscsi_set_options", 00:04:40.668 "iscsi_get_auth_groups", 00:04:40.668 "iscsi_auth_group_remove_secret", 00:04:40.668 "iscsi_auth_group_add_secret", 00:04:40.668 "iscsi_delete_auth_group", 00:04:40.668 "iscsi_create_auth_group", 00:04:40.668 "iscsi_set_discovery_auth", 00:04:40.668 "iscsi_get_options", 00:04:40.668 "iscsi_target_node_request_logout", 00:04:40.668 "iscsi_target_node_set_redirect", 00:04:40.668 "iscsi_target_node_set_auth", 00:04:40.668 "iscsi_target_node_add_lun", 00:04:40.668 "iscsi_get_stats", 00:04:40.668 "iscsi_get_connections", 00:04:40.668 "iscsi_portal_group_set_auth", 00:04:40.668 "iscsi_start_portal_group", 00:04:40.668 "iscsi_delete_portal_group", 00:04:40.668 "iscsi_create_portal_group", 00:04:40.668 "iscsi_get_portal_groups", 00:04:40.668 "iscsi_delete_target_node", 00:04:40.668 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.668 "iscsi_target_node_add_pg_ig_maps", 00:04:40.668 "iscsi_create_target_node", 00:04:40.668 "iscsi_get_target_nodes", 00:04:40.668 "iscsi_delete_initiator_group", 00:04:40.668 "iscsi_initiator_group_remove_initiators", 00:04:40.668 "iscsi_initiator_group_add_initiators", 00:04:40.668 "iscsi_create_initiator_group", 00:04:40.668 "iscsi_get_initiator_groups", 00:04:40.668 "nvmf_set_crdt", 00:04:40.668 "nvmf_set_config", 00:04:40.668 "nvmf_set_max_subsystems", 00:04:40.668 "nvmf_stop_mdns_prr", 00:04:40.668 "nvmf_publish_mdns_prr", 00:04:40.668 "nvmf_subsystem_get_listeners", 00:04:40.668 "nvmf_subsystem_get_qpairs", 00:04:40.668 
"nvmf_subsystem_get_controllers", 00:04:40.668 "nvmf_get_stats", 00:04:40.668 "nvmf_get_transports", 00:04:40.668 "nvmf_create_transport", 00:04:40.668 "nvmf_get_targets", 00:04:40.668 "nvmf_delete_target", 00:04:40.668 "nvmf_create_target", 00:04:40.668 "nvmf_subsystem_allow_any_host", 00:04:40.668 "nvmf_subsystem_set_keys", 00:04:40.668 "nvmf_subsystem_remove_host", 00:04:40.668 "nvmf_subsystem_add_host", 00:04:40.668 "nvmf_ns_remove_host", 00:04:40.668 "nvmf_ns_add_host", 00:04:40.668 "nvmf_subsystem_remove_ns", 00:04:40.668 "nvmf_subsystem_set_ns_ana_group", 00:04:40.668 "nvmf_subsystem_add_ns", 00:04:40.668 "nvmf_subsystem_listener_set_ana_state", 00:04:40.668 "nvmf_discovery_get_referrals", 00:04:40.668 "nvmf_discovery_remove_referral", 00:04:40.668 "nvmf_discovery_add_referral", 00:04:40.668 "nvmf_subsystem_remove_listener", 00:04:40.668 "nvmf_subsystem_add_listener", 00:04:40.668 "nvmf_delete_subsystem", 00:04:40.668 "nvmf_create_subsystem", 00:04:40.668 "nvmf_get_subsystems", 00:04:40.668 "env_dpdk_get_mem_stats", 00:04:40.668 "nbd_get_disks", 00:04:40.668 "nbd_stop_disk", 00:04:40.668 "nbd_start_disk", 00:04:40.668 "ublk_recover_disk", 00:04:40.668 "ublk_get_disks", 00:04:40.668 "ublk_stop_disk", 00:04:40.668 "ublk_start_disk", 00:04:40.668 "ublk_destroy_target", 00:04:40.668 "ublk_create_target", 00:04:40.668 "virtio_blk_create_transport", 00:04:40.668 "virtio_blk_get_transports", 00:04:40.668 "vhost_controller_set_coalescing", 00:04:40.668 "vhost_get_controllers", 00:04:40.668 "vhost_delete_controller", 00:04:40.668 "vhost_create_blk_controller", 00:04:40.668 "vhost_scsi_controller_remove_target", 00:04:40.668 "vhost_scsi_controller_add_target", 00:04:40.668 "vhost_start_scsi_controller", 00:04:40.668 "vhost_create_scsi_controller", 00:04:40.668 "thread_set_cpumask", 00:04:40.668 "scheduler_set_options", 00:04:40.668 "framework_get_governor", 00:04:40.668 "framework_get_scheduler", 00:04:40.668 "framework_set_scheduler", 00:04:40.668 "framework_get_reactors", 00:04:40.668 "thread_get_io_channels", 00:04:40.668 "thread_get_pollers", 00:04:40.668 "thread_get_stats", 00:04:40.668 "framework_monitor_context_switch", 00:04:40.668 "spdk_kill_instance", 00:04:40.668 "log_enable_timestamps", 00:04:40.668 "log_get_flags", 00:04:40.668 "log_clear_flag", 00:04:40.668 "log_set_flag", 00:04:40.668 "log_get_level", 00:04:40.668 "log_set_level", 00:04:40.668 "log_get_print_level", 00:04:40.668 "log_set_print_level", 00:04:40.668 "framework_enable_cpumask_locks", 00:04:40.668 "framework_disable_cpumask_locks", 00:04:40.668 "framework_wait_init", 00:04:40.668 "framework_start_init", 00:04:40.668 "scsi_get_devices", 00:04:40.668 "bdev_get_histogram", 00:04:40.668 "bdev_enable_histogram", 00:04:40.668 "bdev_set_qos_limit", 00:04:40.668 "bdev_set_qd_sampling_period", 00:04:40.668 "bdev_get_bdevs", 00:04:40.668 "bdev_reset_iostat", 00:04:40.668 "bdev_get_iostat", 00:04:40.668 "bdev_examine", 00:04:40.668 "bdev_wait_for_examine", 00:04:40.668 "bdev_set_options", 00:04:40.668 "accel_get_stats", 00:04:40.668 "accel_set_options", 00:04:40.668 "accel_set_driver", 00:04:40.668 "accel_crypto_key_destroy", 00:04:40.668 "accel_crypto_keys_get", 00:04:40.668 "accel_crypto_key_create", 00:04:40.668 "accel_assign_opc", 00:04:40.668 "accel_get_module_info", 00:04:40.668 "accel_get_opc_assignments", 00:04:40.668 "vmd_rescan", 00:04:40.668 "vmd_remove_device", 00:04:40.668 "vmd_enable", 00:04:40.668 "sock_get_default_impl", 00:04:40.668 "sock_set_default_impl", 00:04:40.668 "sock_impl_set_options", 00:04:40.668 
"sock_impl_get_options", 00:04:40.668 "iobuf_get_stats", 00:04:40.668 "iobuf_set_options", 00:04:40.668 "keyring_get_keys", 00:04:40.668 "framework_get_pci_devices", 00:04:40.668 "framework_get_config", 00:04:40.668 "framework_get_subsystems", 00:04:40.668 "fsdev_set_opts", 00:04:40.668 "fsdev_get_opts", 00:04:40.669 "trace_get_info", 00:04:40.669 "trace_get_tpoint_group_mask", 00:04:40.669 "trace_disable_tpoint_group", 00:04:40.669 "trace_enable_tpoint_group", 00:04:40.669 "trace_clear_tpoint_mask", 00:04:40.669 "trace_set_tpoint_mask", 00:04:40.669 "notify_get_notifications", 00:04:40.669 "notify_get_types", 00:04:40.669 "spdk_get_version", 00:04:40.669 "rpc_get_methods" 00:04:40.669 ] 00:04:40.669 20:25:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.669 20:25:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.669 20:25:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.928 20:25:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.928 20:25:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57800 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57800 ']' 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57800 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57800 00:04:40.928 killing process with pid 57800 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57800' 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57800 00:04:40.928 20:25:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57800 00:04:41.187 ************************************ 00:04:41.187 END TEST spdkcli_tcp 00:04:41.187 ************************************ 00:04:41.187 00:04:41.187 real 0m1.403s 00:04:41.187 user 0m2.419s 00:04:41.187 sys 0m0.431s 00:04:41.187 20:25:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.187 20:25:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.187 20:25:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.187 20:25:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.187 20:25:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.187 20:25:41 -- common/autotest_common.sh@10 -- # set +x 00:04:41.187 ************************************ 00:04:41.187 START TEST dpdk_mem_utility 00:04:41.187 ************************************ 00:04:41.187 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.447 * Looking for test storage... 
00:04:41.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.447 20:25:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.447 --rc genhtml_branch_coverage=1 00:04:41.447 --rc genhtml_function_coverage=1 00:04:41.447 --rc genhtml_legend=1 00:04:41.447 --rc geninfo_all_blocks=1 00:04:41.447 --rc geninfo_unexecuted_blocks=1 00:04:41.447 00:04:41.447 ' 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.447 --rc 
genhtml_branch_coverage=1 00:04:41.447 --rc genhtml_function_coverage=1 00:04:41.447 --rc genhtml_legend=1 00:04:41.447 --rc geninfo_all_blocks=1 00:04:41.447 --rc geninfo_unexecuted_blocks=1 00:04:41.447 00:04:41.447 ' 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.447 --rc genhtml_branch_coverage=1 00:04:41.447 --rc genhtml_function_coverage=1 00:04:41.447 --rc genhtml_legend=1 00:04:41.447 --rc geninfo_all_blocks=1 00:04:41.447 --rc geninfo_unexecuted_blocks=1 00:04:41.447 00:04:41.447 ' 00:04:41.447 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.447 --rc genhtml_branch_coverage=1 00:04:41.447 --rc genhtml_function_coverage=1 00:04:41.447 --rc genhtml_legend=1 00:04:41.447 --rc geninfo_all_blocks=1 00:04:41.447 --rc geninfo_unexecuted_blocks=1 00:04:41.447 00:04:41.447 ' 00:04:41.447 20:25:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.447 20:25:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57886 00:04:41.447 20:25:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.448 20:25:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57886 00:04:41.448 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57886 ']' 00:04:41.448 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.448 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.448 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.448 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.448 20:25:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.448 [2024-11-26 20:25:41.763136] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
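The memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write a DPDK memory snapshot to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py turns that file into the heap, mempool and memzone summary (and, with -m 0, the per-heap element listing). Roughly:

    # Ask the running target to dump its DPDK memory state to a file.
    scripts/rpc.py env_dpdk_get_mem_stats      # reports {"filename": "/tmp/spdk_mem_dump.txt"}
    # Summarize the dump: totals first, then heap 0 in detail.
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0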
00:04:41.448 [2024-11-26 20:25:41.763590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57886 ] 00:04:41.706 [2024-11-26 20:25:41.909521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.706 [2024-11-26 20:25:41.972341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.706 [2024-11-26 20:25:42.044837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.643 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.643 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:42.643 20:25:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.643 20:25:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.643 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.643 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.643 { 00:04:42.643 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.643 } 00:04:42.643 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.643 20:25:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.643 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:42.643 1 heaps totaling size 818.000000 MiB 00:04:42.643 size: 818.000000 MiB heap id: 0 00:04:42.643 end heaps---------- 00:04:42.643 9 mempools totaling size 603.782043 MiB 00:04:42.643 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.643 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.643 size: 100.555481 MiB name: bdev_io_57886 00:04:42.643 size: 50.003479 MiB name: msgpool_57886 00:04:42.643 size: 36.509338 MiB name: fsdev_io_57886 00:04:42.643 size: 21.763794 MiB name: PDU_Pool 00:04:42.643 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.643 size: 4.133484 MiB name: evtpool_57886 00:04:42.643 size: 0.026123 MiB name: Session_Pool 00:04:42.643 end mempools------- 00:04:42.643 6 memzones totaling size 4.142822 MiB 00:04:42.643 size: 1.000366 MiB name: RG_ring_0_57886 00:04:42.644 size: 1.000366 MiB name: RG_ring_1_57886 00:04:42.644 size: 1.000366 MiB name: RG_ring_4_57886 00:04:42.644 size: 1.000366 MiB name: RG_ring_5_57886 00:04:42.644 size: 0.125366 MiB name: RG_ring_2_57886 00:04:42.644 size: 0.015991 MiB name: RG_ring_3_57886 00:04:42.644 end memzones------- 00:04:42.644 20:25:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.644 heap id: 0 total size: 818.000000 MiB number of busy elements: 315 number of free elements: 15 00:04:42.644 list of free elements. 
size: 10.802856 MiB 00:04:42.644 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:42.644 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:42.644 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:42.644 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:42.644 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:42.644 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:42.644 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:42.644 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:42.644 element at address: 0x20001ae00000 with size: 0.568054 MiB 00:04:42.644 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:42.644 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:42.644 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:42.644 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:42.644 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:42.644 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:42.644 list of standard malloc elements. size: 199.268250 MiB 00:04:42.644 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:42.644 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:42.644 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:42.644 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:42.644 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:42.644 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.644 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:42.644 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.644 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:42.644 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:42.644 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:42.644 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:42.644 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:42.644 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:42.645 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92e00 with size: 0.000183 MiB 
00:04:42.645 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:42.645 element at 
address: 0x20001ae95380 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:42.645 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:42.645 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826e340 with size: 0.000183 MiB 00:04:42.645 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e4c0 
with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:42.646 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:42.646 list of memzone associated elements. 
size: 607.928894 MiB 00:04:42.646 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:42.646 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.646 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:42.646 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.646 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:42.646 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57886_0 00:04:42.646 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:42.646 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57886_0 00:04:42.646 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:42.646 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57886_0 00:04:42.646 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:42.646 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.646 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:42.646 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.646 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:42.646 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57886_0 00:04:42.646 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:42.646 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57886 00:04:42.646 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.646 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57886 00:04:42.646 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:42.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.646 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:42.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.646 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:42.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.646 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:42.646 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.646 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:42.646 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57886 00:04:42.646 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:42.646 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57886 00:04:42.646 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:42.646 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57886 00:04:42.646 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:42.646 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57886 00:04:42.646 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:42.646 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57886 00:04:42.646 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:42.646 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57886 00:04:42.646 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:42.646 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.646 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:42.646 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.646 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:42.646 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.646 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:42.646 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57886 00:04:42.646 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:42.646 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57886 00:04:42.646 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:42.646 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.646 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:42.646 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.646 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:42.646 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57886 00:04:42.646 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:42.646 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.646 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:42.646 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57886 00:04:42.646 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:42.646 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57886 00:04:42.646 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:42.646 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57886 00:04:42.646 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:42.646 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.646 20:25:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.646 20:25:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57886 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57886 ']' 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57886 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57886 00:04:42.646 killing process with pid 57886 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57886' 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57886 00:04:42.646 20:25:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57886 00:04:43.215 ************************************ 00:04:43.215 END TEST dpdk_mem_utility 00:04:43.215 ************************************ 00:04:43.215 00:04:43.215 real 0m1.788s 00:04:43.215 user 0m1.917s 00:04:43.215 sys 0m0.438s 00:04:43.215 20:25:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.215 20:25:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.215 20:25:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.215 20:25:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.215 20:25:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.215 20:25:43 -- common/autotest_common.sh@10 -- # set +x 
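Note: the DPDK memory dump above can be reproduced by hand against a running spdk_tgt. This is a minimal sketch using the scripts this test invokes; the socket and dump-file paths are the defaults reported in this log, and the meaning of -m is inferred from the "-m 0" invocation above, so treat the flags as assumptions rather than documented behavior:
  $ ./build/bin/spdk_tgt &
  $ ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt (filename reported in the RPC reply above)
  $ ./scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones from that dump
  $ ./scripts/dpdk_mem_info.py -m 0            # per-element listing for heap id 0, as printed above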
00:04:43.215 ************************************ 00:04:43.215 START TEST event 00:04:43.215 ************************************ 00:04:43.215 20:25:43 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.215 * Looking for test storage... 00:04:43.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:43.215 20:25:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.215 20:25:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.215 20:25:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.215 20:25:43 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.215 20:25:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.215 20:25:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.215 20:25:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.215 20:25:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.215 20:25:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.215 20:25:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.215 20:25:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.215 20:25:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.215 20:25:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.215 20:25:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.215 20:25:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.215 20:25:43 event -- scripts/common.sh@344 -- # case "$op" in 00:04:43.215 20:25:43 event -- scripts/common.sh@345 -- # : 1 00:04:43.215 20:25:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.215 20:25:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.215 20:25:43 event -- scripts/common.sh@365 -- # decimal 1 00:04:43.215 20:25:43 event -- scripts/common.sh@353 -- # local d=1 00:04:43.215 20:25:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.215 20:25:43 event -- scripts/common.sh@355 -- # echo 1 00:04:43.215 20:25:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.215 20:25:43 event -- scripts/common.sh@366 -- # decimal 2 00:04:43.216 20:25:43 event -- scripts/common.sh@353 -- # local d=2 00:04:43.216 20:25:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.216 20:25:43 event -- scripts/common.sh@355 -- # echo 2 00:04:43.216 20:25:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.216 20:25:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.216 20:25:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.216 20:25:43 event -- scripts/common.sh@368 -- # return 0 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.216 --rc genhtml_branch_coverage=1 00:04:43.216 --rc genhtml_function_coverage=1 00:04:43.216 --rc genhtml_legend=1 00:04:43.216 --rc geninfo_all_blocks=1 00:04:43.216 --rc geninfo_unexecuted_blocks=1 00:04:43.216 00:04:43.216 ' 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.216 --rc genhtml_branch_coverage=1 00:04:43.216 --rc genhtml_function_coverage=1 00:04:43.216 --rc genhtml_legend=1 00:04:43.216 --rc 
geninfo_all_blocks=1 00:04:43.216 --rc geninfo_unexecuted_blocks=1 00:04:43.216 00:04:43.216 ' 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.216 --rc genhtml_branch_coverage=1 00:04:43.216 --rc genhtml_function_coverage=1 00:04:43.216 --rc genhtml_legend=1 00:04:43.216 --rc geninfo_all_blocks=1 00:04:43.216 --rc geninfo_unexecuted_blocks=1 00:04:43.216 00:04:43.216 ' 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.216 --rc genhtml_branch_coverage=1 00:04:43.216 --rc genhtml_function_coverage=1 00:04:43.216 --rc genhtml_legend=1 00:04:43.216 --rc geninfo_all_blocks=1 00:04:43.216 --rc geninfo_unexecuted_blocks=1 00:04:43.216 00:04:43.216 ' 00:04:43.216 20:25:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:43.216 20:25:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.216 20:25:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:43.216 20:25:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.216 20:25:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.216 ************************************ 00:04:43.216 START TEST event_perf 00:04:43.216 ************************************ 00:04:43.216 20:25:43 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.216 Running I/O for 1 seconds...[2024-11-26 20:25:43.559623] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:43.216 [2024-11-26 20:25:43.560054] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57971 ] 00:04:43.475 [2024-11-26 20:25:43.721959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.475 [2024-11-26 20:25:43.796709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.475 [2024-11-26 20:25:43.796848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.475 [2024-11-26 20:25:43.796787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.475 [2024-11-26 20:25:43.796856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.884 Running I/O for 1 seconds... 00:04:44.884 lcore 0: 202083 00:04:44.884 lcore 1: 202082 00:04:44.884 lcore 2: 202082 00:04:44.884 lcore 3: 202082 00:04:44.884 done. 
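Note: the per-lcore totals above are the output of a 1-second event_perf run on core mask 0xF; each of the four reactors reports how many events it processed in that window. A rough manual equivalent, assuming the binary was built in place as invoked by this job (-m taken as the core mask and -t as the run time in seconds, inferred from this log):
  $ ./test/event/event_perf/event_perf -m 0xF -t 1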
00:04:44.884 00:04:44.884 real 0m1.319s 00:04:44.884 user 0m4.120s 00:04:44.884 sys 0m0.077s 00:04:44.884 20:25:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.884 ************************************ 00:04:44.884 END TEST event_perf 00:04:44.884 ************************************ 00:04:44.884 20:25:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 20:25:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.884 20:25:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:44.884 20:25:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.884 20:25:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 ************************************ 00:04:44.884 START TEST event_reactor 00:04:44.884 ************************************ 00:04:44.884 20:25:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.884 [2024-11-26 20:25:44.916537] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:44.884 [2024-11-26 20:25:44.916641] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58004 ] 00:04:44.884 [2024-11-26 20:25:45.061648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.884 [2024-11-26 20:25:45.123536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.823 test_start 00:04:45.823 oneshot 00:04:45.823 tick 100 00:04:45.823 tick 100 00:04:45.823 tick 250 00:04:45.823 tick 100 00:04:45.823 tick 100 00:04:45.823 tick 100 00:04:45.823 tick 250 00:04:45.823 tick 500 00:04:45.823 tick 100 00:04:45.823 tick 100 00:04:45.823 tick 250 00:04:45.823 tick 100 00:04:45.823 tick 100 00:04:45.823 test_end 00:04:45.823 00:04:45.823 real 0m1.271s 00:04:45.823 user 0m1.124s 00:04:45.823 sys 0m0.041s 00:04:45.823 20:25:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.823 ************************************ 00:04:45.823 END TEST event_reactor 00:04:45.823 ************************************ 00:04:45.823 20:25:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.081 20:25:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.081 20:25:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.081 20:25:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.081 20:25:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.081 ************************************ 00:04:46.081 START TEST event_reactor_perf 00:04:46.081 ************************************ 00:04:46.081 20:25:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.081 [2024-11-26 20:25:46.235411] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:04:46.081 [2024-11-26 20:25:46.235623] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58039 ] 00:04:46.081 [2024-11-26 20:25:46.378869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.338 [2024-11-26 20:25:46.438634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.271 test_start 00:04:47.271 test_end 00:04:47.271 Performance: 374258 events per second 00:04:47.271 00:04:47.271 real 0m1.272s 00:04:47.271 user 0m1.124s 00:04:47.271 sys 0m0.041s 00:04:47.271 20:25:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.271 20:25:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.271 ************************************ 00:04:47.271 END TEST event_reactor_perf 00:04:47.271 ************************************ 00:04:47.271 20:25:47 event -- event/event.sh@49 -- # uname -s 00:04:47.271 20:25:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.271 20:25:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.271 20:25:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.271 20:25:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.271 20:25:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.271 ************************************ 00:04:47.271 START TEST event_scheduler 00:04:47.271 ************************************ 00:04:47.271 20:25:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.530 * Looking for test storage... 
00:04:47.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.530 20:25:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.530 --rc genhtml_branch_coverage=1 00:04:47.530 --rc genhtml_function_coverage=1 00:04:47.530 --rc genhtml_legend=1 00:04:47.530 --rc geninfo_all_blocks=1 00:04:47.530 --rc geninfo_unexecuted_blocks=1 00:04:47.530 00:04:47.530 ' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.530 --rc genhtml_branch_coverage=1 00:04:47.530 --rc genhtml_function_coverage=1 00:04:47.530 --rc genhtml_legend=1 00:04:47.530 --rc geninfo_all_blocks=1 00:04:47.530 --rc geninfo_unexecuted_blocks=1 00:04:47.530 00:04:47.530 ' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.530 --rc genhtml_branch_coverage=1 00:04:47.530 --rc genhtml_function_coverage=1 00:04:47.530 --rc genhtml_legend=1 00:04:47.530 --rc geninfo_all_blocks=1 00:04:47.530 --rc geninfo_unexecuted_blocks=1 00:04:47.530 00:04:47.530 ' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.530 --rc genhtml_branch_coverage=1 00:04:47.530 --rc genhtml_function_coverage=1 00:04:47.530 --rc genhtml_legend=1 00:04:47.530 --rc geninfo_all_blocks=1 00:04:47.530 --rc geninfo_unexecuted_blocks=1 00:04:47.530 00:04:47.530 ' 00:04:47.530 20:25:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.530 20:25:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58109 00:04:47.530 20:25:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.530 20:25:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.530 20:25:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58109 00:04:47.530 20:25:47 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58109 ']' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.530 20:25:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.530 [2024-11-26 20:25:47.790569] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:47.530 [2024-11-26 20:25:47.790905] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58109 ] 00:04:47.789 [2024-11-26 20:25:47.942432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.789 [2024-11-26 20:25:48.017634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.789 [2024-11-26 20:25:48.017774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.789 [2024-11-26 20:25:48.017878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.789 [2024-11-26 20:25:48.017882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:47.789 20:25:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.789 POWER: Cannot set governor of lcore 0 to userspace 00:04:47.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.789 POWER: Cannot set governor of lcore 0 to performance 00:04:47.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.789 POWER: Cannot set governor of lcore 0 to userspace 00:04:47.789 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.789 POWER: Cannot set governor of lcore 0 to userspace 00:04:47.789 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:47.789 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:47.789 POWER: Unable to set Power Management Environment for lcore 0 00:04:47.789 [2024-11-26 20:25:48.101253] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:47.789 [2024-11-26 20:25:48.101394] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:47.789 [2024-11-26 20:25:48.101450] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:47.789 [2024-11-26 20:25:48.101591] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:47.789 [2024-11-26 20:25:48.101721] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:47.789 [2024-11-26 20:25:48.101851] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.789 20:25:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.789 20:25:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 [2024-11-26 20:25:48.166639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.048 [2024-11-26 20:25:48.209905] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:48.048 20:25:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.048 20:25:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.048 20:25:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.048 20:25:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.048 20:25:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 ************************************ 00:04:48.048 START TEST scheduler_create_thread 00:04:48.048 ************************************ 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 2 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 3 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 4 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 5 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.048 6 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.048 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.049 7 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.049 8 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.049 9 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.049 10 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.049 20:25:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.049 20:25:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.983 20:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.983 20:25:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.983 20:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.983 20:25:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.458 20:25:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.458 20:25:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:50.458 20:25:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:50.458 20:25:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.458 20:25:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.394 ************************************ 00:04:51.394 END TEST scheduler_create_thread 00:04:51.394 ************************************ 00:04:51.394 20:25:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.394 00:04:51.394 real 0m3.376s 00:04:51.394 user 0m0.020s 00:04:51.394 sys 0m0.006s 00:04:51.394 20:25:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.394 20:25:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.394 20:25:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:51.394 20:25:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58109 00:04:51.394 20:25:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58109 ']' 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58109 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58109 00:04:51.395 killing process with pid 58109 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58109' 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58109 00:04:51.395 20:25:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58109 00:04:51.654 [2024-11-26 20:25:51.979182] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:51.913 ************************************ 00:04:51.913 END TEST event_scheduler 00:04:51.913 ************************************ 00:04:51.913 00:04:51.913 real 0m4.684s 00:04:51.913 user 0m8.171s 00:04:51.913 sys 0m0.361s 00:04:51.913 20:25:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.913 20:25:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.172 20:25:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:52.172 20:25:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:52.172 20:25:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.172 20:25:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.172 20:25:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.172 ************************************ 00:04:52.172 START TEST app_repeat 00:04:52.172 ************************************ 00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58201 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.172 Process app_repeat pid: 58201 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58201' 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:52.172 spdk_app_start Round 0 00:04:52.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:52.172 20:25:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58201 /var/tmp/spdk-nbd.sock 00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58201 ']' 00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
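The POWER and guest-channel errors in the event_scheduler trace above come from the DPDK governor failing to initialize inside the VM (no cpufreq access), so the run simply continues without it. Everything else in that test is driven through scheduler_plugin RPCs; the following is a hedged reconstruction of that sequence, assuming the scheduler test app is already listening on /var/tmp/spdk.sock and that scripts/rpc.py can import scheduler_plugin from the test's directory (the rpc helper variable is illustrative, not part of the test script):

    # Hedged reconstruction of the RPC calls traced above; not the test script itself.
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

    $rpc framework_set_scheduler dynamic      # switch to the dynamic scheduler
    $rpc framework_start_init                 # finish subsystem initialization

    # One fully busy and one idle thread pinned to each of the four cores.
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done

    # Unpinned threads: one at 30% load, one raised to 50% after creation,
    # and one created only to be deleted again.
    $rpc scheduler_thread_create -n one_third_active -a 30
    tid=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$tid" 50
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"

The RPCs return the new thread id on stdout, which is why the trace captures thread_id=11 and thread_id=12 before calling scheduler_thread_set_active and scheduler_thread_delete.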
00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.172 20:25:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.172 [2024-11-26 20:25:52.315522] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:04:52.172 [2024-11-26 20:25:52.315624] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58201 ] 00:04:52.172 [2024-11-26 20:25:52.458611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.442 [2024-11-26 20:25:52.529265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.442 [2024-11-26 20:25:52.529316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.442 [2024-11-26 20:25:52.587778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.442 20:25:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.442 20:25:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.442 20:25:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.699 Malloc0 00:04:52.699 20:25:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.985 Malloc1 00:04:52.985 20:25:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.985 20:25:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.246 /dev/nbd0 00:04:53.246 20:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.246 20:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.246 1+0 records in 00:04:53.246 1+0 records out 00:04:53.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224452 s, 18.2 MB/s 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.246 20:25:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.246 20:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.246 20:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.246 20:25:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.505 /dev/nbd1 00:04:53.505 20:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.505 20:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.505 20:25:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:53.505 20:25:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.505 20:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.505 20:25:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.505 20:25:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.764 1+0 records in 00:04:53.764 1+0 records out 00:04:53.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384882 s, 10.6 MB/s 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.764 20:25:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.764 20:25:53 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:53.764 20:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.764 20:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.765 20:25:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.765 20:25:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.765 20:25:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.024 { 00:04:54.024 "nbd_device": "/dev/nbd0", 00:04:54.024 "bdev_name": "Malloc0" 00:04:54.024 }, 00:04:54.024 { 00:04:54.024 "nbd_device": "/dev/nbd1", 00:04:54.024 "bdev_name": "Malloc1" 00:04:54.024 } 00:04:54.024 ]' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.024 { 00:04:54.024 "nbd_device": "/dev/nbd0", 00:04:54.024 "bdev_name": "Malloc0" 00:04:54.024 }, 00:04:54.024 { 00:04:54.024 "nbd_device": "/dev/nbd1", 00:04:54.024 "bdev_name": "Malloc1" 00:04:54.024 } 00:04:54.024 ]' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.024 /dev/nbd1' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.024 /dev/nbd1' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.024 256+0 records in 00:04:54.024 256+0 records out 00:04:54.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047129 s, 222 MB/s 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.024 256+0 records in 00:04:54.024 256+0 records out 00:04:54.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233306 s, 44.9 MB/s 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.024 256+0 records in 00:04:54.024 
256+0 records out 00:04:54.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250588 s, 41.8 MB/s 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.024 20:25:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.283 20:25:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.284 20:25:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.542 20:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.802 20:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.802 20:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.802 20:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.802 20:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.062 20:25:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.062 20:25:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.321 20:25:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.321 [2024-11-26 20:25:55.669109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.580 [2024-11-26 20:25:55.733153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.580 [2024-11-26 20:25:55.733167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.580 [2024-11-26 20:25:55.787013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.580 [2024-11-26 20:25:55.787099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.580 [2024-11-26 20:25:55.787113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.163 spdk_app_start Round 1 00:04:58.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.163 20:25:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.163 20:25:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.163 20:25:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58201 /var/tmp/spdk-nbd.sock 00:04:58.163 20:25:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58201 ']' 00:04:58.163 20:25:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.163 20:25:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.163 20:25:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
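Between "spdk_app_start Round 0" and the SIGTERM above, each app_repeat round follows the same fixed pattern: create two malloc bdevs, export them over nbd, push 1 MiB of random data through each device and compare it back, then tear everything down. A condensed sketch of one round, using the socket, paths and sizes visible in the trace (the loop grouping and variable names are illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    $rpc bdev_malloc_create 64 4096           # 64 MB bdev, 4 KiB blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096           # second bdev -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$testfile" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$testfile" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$testfile" "$dev"       # any mismatch fails the round
    done
    rm "$testfile"

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM           # app_repeat restarts the app for the next round

Rounds 1 and 2 below repeat exactly this flow against the restarted app.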
00:04:58.163 20:25:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.163 20:25:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.730 20:25:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.730 20:25:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:58.730 20:25:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.989 Malloc0 00:04:58.989 20:25:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.248 Malloc1 00:04:59.248 20:25:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.248 20:25:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.248 20:25:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.248 20:25:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.248 20:25:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.248 20:25:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.249 20:25:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.580 /dev/nbd0 00:04:59.580 20:25:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.580 20:25:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.580 1+0 records in 00:04:59.580 1+0 records out 
00:04:59.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413531 s, 9.9 MB/s 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.580 20:25:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.580 20:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.580 20:25:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.580 20:25:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.861 /dev/nbd1 00:04:59.861 20:26:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.861 20:26:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.861 1+0 records in 00:04:59.861 1+0 records out 00:04:59.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300597 s, 13.6 MB/s 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.861 20:26:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.861 20:26:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.861 20:26:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.120 20:26:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.120 20:26:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.120 20:26:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.379 { 00:05:00.379 "nbd_device": "/dev/nbd0", 00:05:00.379 "bdev_name": "Malloc0" 00:05:00.379 }, 00:05:00.379 { 00:05:00.379 "nbd_device": "/dev/nbd1", 00:05:00.379 "bdev_name": "Malloc1" 00:05:00.379 } 
00:05:00.379 ]' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.379 { 00:05:00.379 "nbd_device": "/dev/nbd0", 00:05:00.379 "bdev_name": "Malloc0" 00:05:00.379 }, 00:05:00.379 { 00:05:00.379 "nbd_device": "/dev/nbd1", 00:05:00.379 "bdev_name": "Malloc1" 00:05:00.379 } 00:05:00.379 ]' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.379 /dev/nbd1' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.379 /dev/nbd1' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.379 256+0 records in 00:05:00.379 256+0 records out 00:05:00.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00804315 s, 130 MB/s 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.379 256+0 records in 00:05:00.379 256+0 records out 00:05:00.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230266 s, 45.5 MB/s 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.379 256+0 records in 00:05:00.379 256+0 records out 00:05:00.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304347 s, 34.5 MB/s 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.379 20:26:00 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.379 20:26:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.380 20:26:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.380 20:26:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.380 20:26:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.380 20:26:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.380 20:26:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.639 20:26:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.898 20:26:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.465 20:26:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.465 20:26:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.724 20:26:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.724 [2024-11-26 20:26:02.064683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.983 [2024-11-26 20:26:02.122583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.983 [2024-11-26 20:26:02.122597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.983 [2024-11-26 20:26:02.176637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.983 [2024-11-26 20:26:02.176731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.983 [2024-11-26 20:26:02.176744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.266 spdk_app_start Round 2 00:05:05.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.266 20:26:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.266 20:26:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.266 20:26:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58201 /var/tmp/spdk-nbd.sock 00:05:05.266 20:26:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58201 ']' 00:05:05.266 20:26:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.266 20:26:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.266 20:26:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
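The empty '[]' returned by nbd_get_disks above is how the helper confirms both devices are really detached before killing the app: the JSON array of {nbd_device, bdev_name} objects is reduced to device paths with jq and counted with grep. A minimal sketch of that check, assuming the same RPC socket as the trace (variable names are illustrative):

    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')   # "/dev/nbd0" and "/dev/nbd1" while attached
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits non-zero on zero matches
    [ "$count" -eq 0 ] || echo "nbd devices still attached: $count"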
00:05:05.266 20:26:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.266 20:26:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.266 20:26:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.266 20:26:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.266 20:26:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.266 Malloc0 00:05:05.266 20:26:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.524 Malloc1 00:05:05.524 20:26:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.524 20:26:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.782 /dev/nbd0 00:05:06.039 20:26:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.039 20:26:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.039 20:26:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.040 1+0 records in 00:05:06.040 1+0 records out 
00:05:06.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292251 s, 14.0 MB/s 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.040 20:26:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.040 20:26:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.040 20:26:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.040 20:26:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.297 /dev/nbd1 00:05:06.297 20:26:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.297 20:26:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.297 20:26:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.298 1+0 records in 00:05:06.298 1+0 records out 00:05:06.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294968 s, 13.9 MB/s 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.298 20:26:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.298 20:26:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.298 20:26:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.298 20:26:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.298 20:26:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.298 20:26:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.556 { 00:05:06.556 "nbd_device": "/dev/nbd0", 00:05:06.556 "bdev_name": "Malloc0" 00:05:06.556 }, 00:05:06.556 { 00:05:06.556 "nbd_device": "/dev/nbd1", 00:05:06.556 "bdev_name": "Malloc1" 00:05:06.556 } 
00:05:06.556 ]' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.556 { 00:05:06.556 "nbd_device": "/dev/nbd0", 00:05:06.556 "bdev_name": "Malloc0" 00:05:06.556 }, 00:05:06.556 { 00:05:06.556 "nbd_device": "/dev/nbd1", 00:05:06.556 "bdev_name": "Malloc1" 00:05:06.556 } 00:05:06.556 ]' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.556 /dev/nbd1' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.556 /dev/nbd1' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.556 256+0 records in 00:05:06.556 256+0 records out 00:05:06.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057144 s, 183 MB/s 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.556 20:26:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.814 256+0 records in 00:05:06.814 256+0 records out 00:05:06.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254008 s, 41.3 MB/s 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.814 256+0 records in 00:05:06.814 256+0 records out 00:05:06.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267947 s, 39.1 MB/s 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.814 20:26:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.111 20:26:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.387 20:26:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.646 20:26:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.646 20:26:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.906 20:26:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.165 [2024-11-26 20:26:08.396338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.165 [2024-11-26 20:26:08.458259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.165 [2024-11-26 20:26:08.458262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.165 [2024-11-26 20:26:08.511764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.165 [2024-11-26 20:26:08.511851] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.165 [2024-11-26 20:26:08.511866] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.454 20:26:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58201 /var/tmp/spdk-nbd.sock 00:05:11.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58201 ']' 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
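The write/verify pass traced above follows a simple pattern: fill a scratch file from /dev/urandom, copy it onto each exported NBD device with direct I/O, then byte-compare the device contents against the file. A minimal standalone sketch of that pattern (device list, sizes, and the temp-file path are placeholders, not the exact values nbd_common.sh uses):

    #!/usr/bin/env bash
    # Sketch of the NBD write/verify pass seen in the trace; adjust devices and size.
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)               # devices previously exported via nbd_start_disk
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)   # scratch data file (hypothetical path)

    # write phase: 1 MiB of random data, copied to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-wise compare of the first 1 MiB of each device
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"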
00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.454 20:26:11 event.app_repeat -- event/event.sh@39 -- # killprocess 58201 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58201 ']' 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58201 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58201 00:05:11.454 killing process with pid 58201 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58201' 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58201 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58201 00:05:11.454 spdk_app_start is called in Round 0. 00:05:11.454 Shutdown signal received, stop current app iteration 00:05:11.454 Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 reinitialization... 00:05:11.454 spdk_app_start is called in Round 1. 00:05:11.454 Shutdown signal received, stop current app iteration 00:05:11.454 Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 reinitialization... 00:05:11.454 spdk_app_start is called in Round 2. 00:05:11.454 Shutdown signal received, stop current app iteration 00:05:11.454 Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 reinitialization... 00:05:11.454 spdk_app_start is called in Round 3. 00:05:11.454 Shutdown signal received, stop current app iteration 00:05:11.454 20:26:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.454 20:26:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.454 00:05:11.454 real 0m19.491s 00:05:11.454 user 0m44.682s 00:05:11.454 sys 0m2.887s 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.454 20:26:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.454 ************************************ 00:05:11.454 END TEST app_repeat 00:05:11.454 ************************************ 00:05:11.713 20:26:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.713 20:26:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.713 20:26:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.713 20:26:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.713 20:26:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.713 ************************************ 00:05:11.713 START TEST cpu_locks 00:05:11.713 ************************************ 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.713 * Looking for test storage... 
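The teardown traced here (ps comm check, kill, wait) repeats for every test in this run. A condensed sketch of that kill-and-wait idea, paraphrased from the trace rather than copied from autotest_common.sh:

    # Sketch of the process-teardown pattern; the real killprocess() does more checks.
    killprocess() {
        local pid=$1
        # only signal the expected SPDK reactor process, never e.g. sudo
        [[ "$(ps --no-headers -o comm= "$pid")" == reactor_0 ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"           # SIGTERM drives the app's normal shutdown path
        wait "$pid" || true   # reap it so later assertions can rely on it being gone
    }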
00:05:11.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.713 20:26:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.713 --rc genhtml_branch_coverage=1 00:05:11.713 --rc genhtml_function_coverage=1 00:05:11.713 --rc genhtml_legend=1 00:05:11.713 --rc geninfo_all_blocks=1 00:05:11.713 --rc geninfo_unexecuted_blocks=1 00:05:11.713 00:05:11.713 ' 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.713 --rc genhtml_branch_coverage=1 00:05:11.713 --rc genhtml_function_coverage=1 
00:05:11.713 --rc genhtml_legend=1 00:05:11.713 --rc geninfo_all_blocks=1 00:05:11.713 --rc geninfo_unexecuted_blocks=1 00:05:11.713 00:05:11.713 ' 00:05:11.713 20:26:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.713 --rc genhtml_branch_coverage=1 00:05:11.713 --rc genhtml_function_coverage=1 00:05:11.713 --rc genhtml_legend=1 00:05:11.713 --rc geninfo_all_blocks=1 00:05:11.714 --rc geninfo_unexecuted_blocks=1 00:05:11.714 00:05:11.714 ' 00:05:11.714 20:26:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.714 --rc genhtml_branch_coverage=1 00:05:11.714 --rc genhtml_function_coverage=1 00:05:11.714 --rc genhtml_legend=1 00:05:11.714 --rc geninfo_all_blocks=1 00:05:11.714 --rc geninfo_unexecuted_blocks=1 00:05:11.714 00:05:11.714 ' 00:05:11.714 20:26:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.714 20:26:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.714 20:26:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.714 20:26:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.714 20:26:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.714 20:26:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.714 20:26:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.714 ************************************ 00:05:11.714 START TEST default_locks 00:05:11.714 ************************************ 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58651 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58651 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58651 ']' 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.714 20:26:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.972 [2024-11-26 20:26:12.088646] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
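The cmp_versions walk-through above decides whether the installed lcov is older than 2.x by splitting both version strings on ".-:" and comparing component by component. A simplified sketch of that comparison (helper name version_lt is illustrative; the real scripts/common.sh supports more operators and edge cases):

    # Simplified dotted-version "less than" check, modelled on the trace above.
    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}      # missing components compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"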
00:05:11.972 [2024-11-26 20:26:12.088770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 00:05:11.972 [2024-11-26 20:26:12.236241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.972 [2024-11-26 20:26:12.300150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.231 [2024-11-26 20:26:12.375083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.231 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.231 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:12.231 20:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58651 00:05:12.489 20:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58651 00:05:12.489 20:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58651 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58651 ']' 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58651 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58651 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.748 killing process with pid 58651 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58651' 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58651 00:05:12.748 20:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58651 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58651 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58651 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58651 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58651 ']' 00:05:13.006 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.265 
20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.265 ERROR: process (pid: 58651) is no longer running 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58651) - No such process 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.265 ************************************ 00:05:13.265 END TEST default_locks 00:05:13.265 ************************************ 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.265 00:05:13.265 real 0m1.366s 00:05:13.265 user 0m1.348s 00:05:13.265 sys 0m0.493s 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.265 20:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.265 20:26:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:13.265 20:26:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.265 20:26:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.265 20:26:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.265 ************************************ 00:05:13.265 START TEST default_locks_via_rpc 00:05:13.265 ************************************ 00:05:13.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
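The default_locks test above relies on the fact that when spdk_tgt starts with a core mask it takes a lock file per claimed core under /var/tmp (the spdk_cpu_lock_* files seen later in this log), which lslocks reports for that PID. A sketch of that check, with the helper name paraphrased from the trace:

    # Sketch of the core-lock existence check exercised by default_locks.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" && echo "core locks are held"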
00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58695 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58695 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58695 ']' 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.265 20:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.265 [2024-11-26 20:26:13.474273] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:13.265 [2024-11-26 20:26:13.474377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58695 ] 00:05:13.524 [2024-11-26 20:26:13.628576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.524 [2024-11-26 20:26:13.707157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.524 [2024-11-26 20:26:13.784624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58695 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.474 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58695 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58695 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58695 ']' 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58695 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58695 00:05:14.733 killing process with pid 58695 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58695' 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58695 00:05:14.733 20:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58695 00:05:15.300 00:05:15.300 real 0m1.938s 00:05:15.300 user 0m2.094s 00:05:15.300 sys 0m0.603s 00:05:15.300 20:26:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.300 ************************************ 00:05:15.300 END TEST default_locks_via_rpc 00:05:15.300 ************************************ 00:05:15.300 20:26:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.300 20:26:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:15.300 20:26:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.300 20:26:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.300 20:26:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.300 ************************************ 00:05:15.300 START TEST non_locking_app_on_locked_coremask 00:05:15.300 ************************************ 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:15.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
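The default_locks_via_rpc test that just finished toggles the same locks at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A minimal sketch of driving those RPCs with the rpc.py from this workspace (socket defaults and the lslocks assertions are simplified):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC framework_disable_cpumask_locks                          # release the per-core lock files
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock || true    # expect 0 remaining
    $RPC framework_enable_cpumask_locks                           # re-acquire them
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-taken"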
00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58746 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58746 /var/tmp/spdk.sock 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58746 ']' 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.300 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.300 [2024-11-26 20:26:15.464027] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:15.300 [2024-11-26 20:26:15.464144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58746 ] 00:05:15.300 [2024-11-26 20:26:15.606913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.559 [2024-11-26 20:26:15.671475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.559 [2024-11-26 20:26:15.744546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58755 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58755 /var/tmp/spdk2.sock 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58755 ']' 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
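The "Waiting for process to start up and listen..." messages throughout this log come from polling the target's RPC Unix socket until it answers. A simplified sketch of that idea (function name wait_for_rpc is illustrative; the real waitforlisten in autotest_common.sh also tracks the PID and retry budget):

    wait_for_rpc() {
        local sock=$1 retries=${2:-100}
        local i
        for ((i = 0; i < retries; i++)); do
            # rpc_get_methods succeeds once the target's RPC server is listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }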
00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.818 20:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.818 [2024-11-26 20:26:16.021547] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:15.818 [2024-11-26 20:26:16.021912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58755 ] 00:05:16.076 [2024-11-26 20:26:16.182959] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:16.076 [2024-11-26 20:26:16.183023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.076 [2024-11-26 20:26:16.312122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.335 [2024-11-26 20:26:16.468546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.903 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.903 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.903 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58746 00:05:16.903 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58746 00:05:16.903 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58746 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58746 ']' 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58746 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58746 00:05:17.839 killing process with pid 58746 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58746' 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58746 00:05:17.839 20:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58746 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58755 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58755 ']' 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 58755 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58755 00:05:18.414 killing process with pid 58755 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58755' 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58755 00:05:18.414 20:26:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58755 00:05:18.981 00:05:18.981 real 0m3.723s 00:05:18.981 user 0m4.118s 00:05:18.981 sys 0m1.106s 00:05:18.981 20:26:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.981 ************************************ 00:05:18.981 END TEST non_locking_app_on_locked_coremask 00:05:18.981 ************************************ 00:05:18.981 20:26:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 20:26:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.981 20:26:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.981 20:26:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.981 20:26:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 ************************************ 00:05:18.981 START TEST locking_app_on_unlocked_coremask 00:05:18.981 ************************************ 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58822 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58822 /var/tmp/spdk.sock 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58822 ']' 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
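The non_locking_app_on_locked_coremask test that just completed verifies that a second target can share core 0 only when it skips core locking and uses its own RPC socket. A sketch of that two-instance setup (binary path and flags taken from the trace; waiting and cleanup omitted for brevity):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$SPDK_BIN" -m 0x1 &                                                # takes the core-0 lock
    pid1=$!
    # second instance: same mask, separate RPC socket, no lock files, so it may coexist
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!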
00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.981 20:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 [2024-11-26 20:26:19.225841] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:18.981 [2024-11-26 20:26:19.225940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58822 ] 00:05:19.241 [2024-11-26 20:26:19.370918] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:19.241 [2024-11-26 20:26:19.370982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.241 [2024-11-26 20:26:19.439029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.241 [2024-11-26 20:26:19.537337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58838 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58838 /var/tmp/spdk2.sock 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58838 ']' 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.177 20:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.177 [2024-11-26 20:26:20.305200] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:20.177 [2024-11-26 20:26:20.305877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:05:20.177 [2024-11-26 20:26:20.466327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.436 [2024-11-26 20:26:20.594718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.436 [2024-11-26 20:26:20.747122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.001 20:26:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.001 20:26:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.001 20:26:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58838 00:05:21.002 20:26:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58838 00:05:21.002 20:26:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58822 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58822 ']' 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58822 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58822 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.956 killing process with pid 58822 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58822' 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58822 00:05:21.956 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58822 00:05:22.552 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58838 00:05:22.552 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58838 ']' 00:05:22.552 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58838 00:05:22.552 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.552 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.553 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58838 00:05:22.553 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.553 killing process with pid 58838 00:05:22.553 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.553 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58838' 00:05:22.553 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58838 00:05:22.553 20:26:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58838 00:05:23.118 00:05:23.118 real 0m4.109s 00:05:23.118 user 0m4.645s 00:05:23.118 sys 0m1.054s 00:05:23.119 ************************************ 00:05:23.119 END TEST locking_app_on_unlocked_coremask 00:05:23.119 ************************************ 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.119 20:26:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:23.119 20:26:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.119 20:26:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.119 20:26:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.119 ************************************ 00:05:23.119 START TEST locking_app_on_locked_coremask 00:05:23.119 ************************************ 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58905 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58905 /var/tmp/spdk.sock 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58905 ']' 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.119 20:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.119 [2024-11-26 20:26:23.399475] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:23.119 [2024-11-26 20:26:23.399591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:05:23.377 [2024-11-26 20:26:23.543836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.377 [2024-11-26 20:26:23.608266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.377 [2024-11-26 20:26:23.684876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58921 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58921 /var/tmp/spdk2.sock 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58921 /var/tmp/spdk2.sock 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58921 /var/tmp/spdk2.sock 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58921 ']' 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.310 20:26:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.310 [2024-11-26 20:26:24.506211] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
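The NOT wrapper appearing around waitforlisten here inverts a command's exit status so that an expected startup failure makes the test pass. A simplified sketch of that wrapper (the real helper in autotest_common.sh also distinguishes signal exits, es > 128):

    NOT() {
        if "$@"; then
            return 1      # command unexpectedly succeeded
        fi
        return 0          # command failed, which is what the test wanted
    }

    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock && echo "second instance was rejected as expected"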
00:05:24.310 [2024-11-26 20:26:24.506338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:05:24.567 [2024-11-26 20:26:24.667175] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58905 has claimed it. 00:05:24.567 [2024-11-26 20:26:24.667268] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.132 ERROR: process (pid: 58921) is no longer running 00:05:25.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58921) - No such process 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58905 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58905 00:05:25.132 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58905 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58905 ']' 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58905 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58905 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.391 killing process with pid 58905 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58905' 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58905 00:05:25.391 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58905 00:05:25.649 00:05:25.649 real 0m2.666s 00:05:25.649 user 0m3.124s 00:05:25.649 sys 0m0.647s 00:05:25.649 ************************************ 00:05:25.649 END TEST locking_app_on_locked_coremask 00:05:25.649 ************************************ 00:05:25.649 20:26:25 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.649 20:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.907 20:26:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.907 20:26:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.907 20:26:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.907 20:26:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.907 ************************************ 00:05:25.907 START TEST locking_overlapped_coremask 00:05:25.907 ************************************ 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58966 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58966 /var/tmp/spdk.sock 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.907 20:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.907 [2024-11-26 20:26:26.134355] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:25.907 [2024-11-26 20:26:26.134459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:05:26.166 [2024-11-26 20:26:26.279663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.166 [2024-11-26 20:26:26.348941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.166 [2024-11-26 20:26:26.349035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.166 [2024-11-26 20:26:26.349037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.166 [2024-11-26 20:26:26.422636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58990 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58990 /var/tmp/spdk2.sock 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58990 /var/tmp/spdk2.sock 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58990 /var/tmp/spdk2.sock 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58990 ']' 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.099 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.099 [2024-11-26 20:26:27.184212] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
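The overlap being tested here: the first target's -m 0x7 claims cores 0-2 and the second target's -m 0x1c claims cores 2-4, so they collide on core 2 and the second must refuse to start. The collision is just the bitwise AND of the two masks:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2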
00:05:27.099 [2024-11-26 20:26:27.184315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:05:27.099 [2024-11-26 20:26:27.353198] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58966 has claimed it. 00:05:27.099 [2024-11-26 20:26:27.357336] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.664 ERROR: process (pid: 58990) is no longer running 00:05:27.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58990) - No such process 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58966 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58966 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.664 killing process with pid 58966 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:05:27.664 20:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58966 00:05:27.664 20:26:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58966 00:05:28.230 00:05:28.231 real 0m2.331s 00:05:28.231 user 0m6.647s 00:05:28.231 sys 0m0.455s 00:05:28.231 ************************************ 00:05:28.231 END TEST locking_overlapped_coremask 00:05:28.231 ************************************ 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.231 20:26:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.231 20:26:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.231 20:26:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.231 20:26:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.231 ************************************ 00:05:28.231 START TEST locking_overlapped_coremask_via_rpc 00:05:28.231 ************************************ 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59030 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59030 /var/tmp/spdk.sock 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59030 ']' 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.231 20:26:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.231 [2024-11-26 20:26:28.491465] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:28.231 [2024-11-26 20:26:28.491570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:05:28.488 [2024-11-26 20:26:28.635555] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.488 [2024-11-26 20:26:28.635602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.488 [2024-11-26 20:26:28.698160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.488 [2024-11-26 20:26:28.698235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.488 [2024-11-26 20:26:28.698242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.488 [2024-11-26 20:26:28.768844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59048 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59048 /var/tmp/spdk2.sock 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59048 ']' 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.437 20:26:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.437 [2024-11-26 20:26:29.604567] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:29.437 [2024-11-26 20:26:29.604720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:05:29.437 [2024-11-26 20:26:29.778619] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
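The *NOTICE* just above is the effect of the --disable-cpumask-locks flag that both spdk_tgt instances in this test are started with: no /var/tmp/spdk_cpu_lock_* files are claimed at startup, so the second target with the overlapping 0x1c mask (launched just below) can come up cleanly, unlike pid 58990 in the previous test. The core locks are only taken later through the framework_enable_cpumask_locks RPC, which is where the overlap on core 2 finally surfaces. The second launch line, as it appears further down in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks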
00:05:29.437 [2024-11-26 20:26:29.778677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.696 [2024-11-26 20:26:29.912731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.696 [2024-11-26 20:26:29.912808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:29.696 [2024-11-26 20:26:29.912811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.955 [2024-11-26 20:26:30.061268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 [2024-11-26 20:26:30.637363] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59030 has claimed it. 
00:05:30.522 request: 00:05:30.522 { 00:05:30.522 "method": "framework_enable_cpumask_locks", 00:05:30.522 "req_id": 1 00:05:30.522 } 00:05:30.522 Got JSON-RPC error response 00:05:30.522 response: 00:05:30.522 { 00:05:30.522 "code": -32603, 00:05:30.522 "message": "Failed to claim CPU core: 2" 00:05:30.522 } 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59030 /var/tmp/spdk.sock 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59030 ']' 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.522 20:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59048 /var/tmp/spdk2.sock 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59048 ']' 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
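The request/response pair above is the raw JSON-RPC exchange behind the failed rpc_cmd call: the secondary target on /var/tmp/spdk2.sock (pid 59048, mask 0x1c) rejects framework_enable_cpumask_locks with -32603 because core 2 overlaps with the primary target (pid 59030, mask 0x7), as the earlier claim_cpu_cores error already indicated. A minimal sketch of issuing the same call by hand with SPDK's rpc.py, assuming both targets from this test are still listening:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail exactly as above, i.e. code -32603, "Failed to claim CPU core: 2",
    # since /var/tmp/spdk_cpu_lock_002 is presumably already held by pid 59030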
00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.780 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.038 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.038 ************************************ 00:05:31.038 END TEST locking_overlapped_coremask_via_rpc 00:05:31.038 ************************************ 00:05:31.038 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.038 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.038 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.038 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.039 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.039 00:05:31.039 real 0m2.879s 00:05:31.039 user 0m1.593s 00:05:31.039 sys 0m0.209s 00:05:31.039 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.039 20:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.039 20:26:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.039 20:26:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59030 ]] 00:05:31.039 20:26:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59030 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59030 ']' 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59030 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59030 00:05:31.039 killing process with pid 59030 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59030' 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59030 00:05:31.039 20:26:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59030 00:05:31.606 20:26:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59048 ]] 00:05:31.606 20:26:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59048 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59048 ']' 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59048 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.606 
20:26:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59048 00:05:31.606 killing process with pid 59048 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59048' 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59048 00:05:31.606 20:26:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59048 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59030 ]] 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59030 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59030 ']' 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59030 00:05:31.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59030) - No such process 00:05:31.865 Process with pid 59030 is not found 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59030 is not found' 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59048 ]] 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59048 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59048 ']' 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59048 00:05:31.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59048) - No such process 00:05:31.865 Process with pid 59048 is not found 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59048 is not found' 00:05:31.865 20:26:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.865 00:05:31.865 real 0m20.381s 00:05:31.865 user 0m37.386s 00:05:31.865 sys 0m5.456s 00:05:31.865 20:26:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.866 20:26:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.866 ************************************ 00:05:31.866 END TEST cpu_locks 00:05:31.866 ************************************ 00:05:32.125 00:05:32.125 real 0m48.901s 00:05:32.125 user 1m36.819s 00:05:32.125 sys 0m9.116s 00:05:32.125 ************************************ 00:05:32.125 END TEST event 00:05:32.125 ************************************ 00:05:32.125 20:26:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.125 20:26:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.125 20:26:32 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:32.125 20:26:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.125 20:26:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.125 20:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:32.125 ************************************ 00:05:32.125 START TEST thread 00:05:32.125 ************************************ 00:05:32.125 20:26:32 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:32.125 * Looking for test storage... 
00:05:32.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:32.125 20:26:32 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.125 20:26:32 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.125 20:26:32 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.125 20:26:32 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.125 20:26:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.125 20:26:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.125 20:26:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.125 20:26:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.125 20:26:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.125 20:26:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.125 20:26:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.125 20:26:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.125 20:26:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.125 20:26:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.125 20:26:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.125 20:26:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:32.125 20:26:32 thread -- scripts/common.sh@345 -- # : 1 00:05:32.125 20:26:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.125 20:26:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.125 20:26:32 thread -- scripts/common.sh@365 -- # decimal 1 00:05:32.125 20:26:32 thread -- scripts/common.sh@353 -- # local d=1 00:05:32.125 20:26:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.125 20:26:32 thread -- scripts/common.sh@355 -- # echo 1 00:05:32.125 20:26:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.125 20:26:32 thread -- scripts/common.sh@366 -- # decimal 2 00:05:32.126 20:26:32 thread -- scripts/common.sh@353 -- # local d=2 00:05:32.126 20:26:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.126 20:26:32 thread -- scripts/common.sh@355 -- # echo 2 00:05:32.126 20:26:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.384 20:26:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.384 20:26:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.384 20:26:32 thread -- scripts/common.sh@368 -- # return 0 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.384 --rc genhtml_branch_coverage=1 00:05:32.384 --rc genhtml_function_coverage=1 00:05:32.384 --rc genhtml_legend=1 00:05:32.384 --rc geninfo_all_blocks=1 00:05:32.384 --rc geninfo_unexecuted_blocks=1 00:05:32.384 00:05:32.384 ' 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.384 --rc genhtml_branch_coverage=1 00:05:32.384 --rc genhtml_function_coverage=1 00:05:32.384 --rc genhtml_legend=1 00:05:32.384 --rc geninfo_all_blocks=1 00:05:32.384 --rc geninfo_unexecuted_blocks=1 00:05:32.384 00:05:32.384 ' 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:32.384 --rc genhtml_branch_coverage=1 00:05:32.384 --rc genhtml_function_coverage=1 00:05:32.384 --rc genhtml_legend=1 00:05:32.384 --rc geninfo_all_blocks=1 00:05:32.384 --rc geninfo_unexecuted_blocks=1 00:05:32.384 00:05:32.384 ' 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.384 --rc genhtml_branch_coverage=1 00:05:32.384 --rc genhtml_function_coverage=1 00:05:32.384 --rc genhtml_legend=1 00:05:32.384 --rc geninfo_all_blocks=1 00:05:32.384 --rc geninfo_unexecuted_blocks=1 00:05:32.384 00:05:32.384 ' 00:05:32.384 20:26:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.384 20:26:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.384 ************************************ 00:05:32.384 START TEST thread_poller_perf 00:05:32.384 ************************************ 00:05:32.384 20:26:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.384 [2024-11-26 20:26:32.513716] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:32.384 [2024-11-26 20:26:32.514040] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:05:32.384 [2024-11-26 20:26:32.660653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.384 [2024-11-26 20:26:32.722467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.384 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:33.822 [2024-11-26T20:26:34.177Z] ====================================== 00:05:33.822 [2024-11-26T20:26:34.177Z] busy:2210173340 (cyc) 00:05:33.822 [2024-11-26T20:26:34.177Z] total_run_count: 318000 00:05:33.822 [2024-11-26T20:26:34.177Z] tsc_hz: 2200000000 (cyc) 00:05:33.822 [2024-11-26T20:26:34.177Z] ====================================== 00:05:33.822 [2024-11-26T20:26:34.177Z] poller_cost: 6950 (cyc), 3159 (nsec) 00:05:33.822 00:05:33.822 real 0m1.290s 00:05:33.822 user 0m1.135s 00:05:33.822 sys 0m0.045s 00:05:33.822 20:26:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.822 20:26:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.822 ************************************ 00:05:33.822 END TEST thread_poller_perf 00:05:33.822 ************************************ 00:05:33.822 20:26:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.822 20:26:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:33.822 20:26:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.822 20:26:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.822 ************************************ 00:05:33.822 START TEST thread_poller_perf 00:05:33.822 ************************************ 00:05:33.822 20:26:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.822 [2024-11-26 20:26:33.858623] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:33.822 [2024-11-26 20:26:33.858709] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:05:33.822 [2024-11-26 20:26:34.003704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.822 [2024-11-26 20:26:34.067258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.822 Running 1000 pollers for 1 seconds with 0 microseconds period. 
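The poller_cost figure in these summary blocks is plain arithmetic over the two counters printed above it: cycles per poller call is busy divided by total_run_count, and the nanosecond value converts that with tsc_hz. For the 1-microsecond-period run above, 2210173340 / 318000 ≈ 6950 cycles and 6950 / 2.2 cycles-per-nanosecond ≈ 3159 nsec, matching the reported cost; the 0-microsecond run that follows works out the same way (2202058063 / 4154000 ≈ 530 cycles, ≈ 240 nsec). A one-liner reproducing the conversion:

    awk 'BEGIN { busy=2210173340; runs=318000; tsc_hz=2200000000;
                 cyc = busy / runs; printf "%.0f cyc, %.0f nsec\n", cyc, cyc * 1e9 / tsc_hz }'
    # prints: 6950 cyc, 3159 nsec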
00:05:35.227 [2024-11-26T20:26:35.582Z] ====================================== 00:05:35.227 [2024-11-26T20:26:35.582Z] busy:2202058063 (cyc) 00:05:35.227 [2024-11-26T20:26:35.582Z] total_run_count: 4154000 00:05:35.227 [2024-11-26T20:26:35.582Z] tsc_hz: 2200000000 (cyc) 00:05:35.227 [2024-11-26T20:26:35.582Z] ====================================== 00:05:35.227 [2024-11-26T20:26:35.582Z] poller_cost: 530 (cyc), 240 (nsec) 00:05:35.227 00:05:35.227 real 0m1.279s 00:05:35.227 user 0m1.133s 00:05:35.227 sys 0m0.038s 00:05:35.227 ************************************ 00:05:35.227 END TEST thread_poller_perf 00:05:35.227 ************************************ 00:05:35.227 20:26:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.227 20:26:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.227 20:26:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.227 ************************************ 00:05:35.227 END TEST thread 00:05:35.227 ************************************ 00:05:35.227 00:05:35.227 real 0m2.865s 00:05:35.227 user 0m2.409s 00:05:35.227 sys 0m0.232s 00:05:35.227 20:26:35 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.227 20:26:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.227 20:26:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:35.227 20:26:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:35.227 20:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.227 20:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.227 20:26:35 -- common/autotest_common.sh@10 -- # set +x 00:05:35.227 ************************************ 00:05:35.227 START TEST app_cmdline 00:05:35.227 ************************************ 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:35.227 * Looking for test storage... 
00:05:35.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.227 20:26:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.227 --rc genhtml_branch_coverage=1 00:05:35.227 --rc genhtml_function_coverage=1 00:05:35.227 --rc genhtml_legend=1 00:05:35.227 --rc geninfo_all_blocks=1 00:05:35.227 --rc geninfo_unexecuted_blocks=1 00:05:35.227 00:05:35.227 ' 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.227 --rc genhtml_branch_coverage=1 00:05:35.227 --rc genhtml_function_coverage=1 00:05:35.227 --rc genhtml_legend=1 00:05:35.227 --rc geninfo_all_blocks=1 00:05:35.227 --rc geninfo_unexecuted_blocks=1 00:05:35.227 00:05:35.227 ' 00:05:35.227 20:26:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.227 --rc genhtml_branch_coverage=1 00:05:35.227 --rc genhtml_function_coverage=1 00:05:35.227 --rc genhtml_legend=1 00:05:35.227 --rc geninfo_all_blocks=1 00:05:35.228 --rc geninfo_unexecuted_blocks=1 00:05:35.228 00:05:35.228 ' 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.228 --rc genhtml_branch_coverage=1 00:05:35.228 --rc genhtml_function_coverage=1 00:05:35.228 --rc genhtml_legend=1 00:05:35.228 --rc geninfo_all_blocks=1 00:05:35.228 --rc geninfo_unexecuted_blocks=1 00:05:35.228 00:05:35.228 ' 00:05:35.228 20:26:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:35.228 20:26:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59302 00:05:35.228 20:26:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59302 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:05:35.228 20:26:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.228 20:26:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.228 [2024-11-26 20:26:35.472280] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:35.228 [2024-11-26 20:26:35.472600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:05:35.487 [2024-11-26 20:26:35.618338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.487 [2024-11-26 20:26:35.683239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.487 [2024-11-26 20:26:35.762176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.746 20:26:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.746 20:26:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:35.746 20:26:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:36.003 { 00:05:36.003 "version": "SPDK v25.01-pre git sha1 5ca6db5da", 00:05:36.003 "fields": { 00:05:36.003 "major": 25, 00:05:36.003 "minor": 1, 00:05:36.003 "patch": 0, 00:05:36.003 "suffix": "-pre", 00:05:36.003 "commit": "5ca6db5da" 00:05:36.003 } 00:05:36.003 } 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:36.003 20:26:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:36.003 20:26:36 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.260 request: 00:05:36.260 { 00:05:36.260 "method": "env_dpdk_get_mem_stats", 00:05:36.260 "req_id": 1 00:05:36.260 } 00:05:36.260 Got JSON-RPC error response 00:05:36.261 response: 00:05:36.261 { 00:05:36.261 "code": -32601, 00:05:36.261 "message": "Method not found" 00:05:36.261 } 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.261 20:26:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59302 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59302 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.261 20:26:36 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59302 00:05:36.518 20:26:36 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.518 20:26:36 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.518 killing process with pid 59302 00:05:36.518 20:26:36 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59302' 00:05:36.518 20:26:36 app_cmdline -- common/autotest_common.sh@973 -- # kill 59302 00:05:36.518 20:26:36 app_cmdline -- common/autotest_common.sh@978 -- # wait 59302 00:05:36.776 ************************************ 00:05:36.776 END TEST app_cmdline 00:05:36.776 ************************************ 00:05:36.776 00:05:36.776 real 0m1.813s 00:05:36.776 user 0m2.182s 00:05:36.776 sys 0m0.494s 00:05:36.776 20:26:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.776 20:26:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.776 20:26:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:36.776 20:26:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.776 20:26:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.776 20:26:37 -- common/autotest_common.sh@10 -- # set +x 00:05:36.776 ************************************ 00:05:36.776 START TEST version 00:05:36.776 ************************************ 00:05:36.776 20:26:37 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:37.047 * Looking for test storage... 
00:05:37.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.047 20:26:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.047 20:26:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.047 20:26:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.047 20:26:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.047 20:26:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.047 20:26:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.047 20:26:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.047 20:26:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.047 20:26:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.047 20:26:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.047 20:26:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.047 20:26:37 version -- scripts/common.sh@344 -- # case "$op" in 00:05:37.047 20:26:37 version -- scripts/common.sh@345 -- # : 1 00:05:37.047 20:26:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.047 20:26:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.047 20:26:37 version -- scripts/common.sh@365 -- # decimal 1 00:05:37.047 20:26:37 version -- scripts/common.sh@353 -- # local d=1 00:05:37.047 20:26:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.047 20:26:37 version -- scripts/common.sh@355 -- # echo 1 00:05:37.047 20:26:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.047 20:26:37 version -- scripts/common.sh@366 -- # decimal 2 00:05:37.047 20:26:37 version -- scripts/common.sh@353 -- # local d=2 00:05:37.047 20:26:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.047 20:26:37 version -- scripts/common.sh@355 -- # echo 2 00:05:37.047 20:26:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.047 20:26:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.047 20:26:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.047 20:26:37 version -- scripts/common.sh@368 -- # return 0 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.047 --rc genhtml_branch_coverage=1 00:05:37.047 --rc genhtml_function_coverage=1 00:05:37.047 --rc genhtml_legend=1 00:05:37.047 --rc geninfo_all_blocks=1 00:05:37.047 --rc geninfo_unexecuted_blocks=1 00:05:37.047 00:05:37.047 ' 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.047 --rc genhtml_branch_coverage=1 00:05:37.047 --rc genhtml_function_coverage=1 00:05:37.047 --rc genhtml_legend=1 00:05:37.047 --rc geninfo_all_blocks=1 00:05:37.047 --rc geninfo_unexecuted_blocks=1 00:05:37.047 00:05:37.047 ' 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.047 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:37.047 --rc genhtml_branch_coverage=1 00:05:37.047 --rc genhtml_function_coverage=1 00:05:37.047 --rc genhtml_legend=1 00:05:37.047 --rc geninfo_all_blocks=1 00:05:37.047 --rc geninfo_unexecuted_blocks=1 00:05:37.047 00:05:37.047 ' 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.047 --rc genhtml_branch_coverage=1 00:05:37.047 --rc genhtml_function_coverage=1 00:05:37.047 --rc genhtml_legend=1 00:05:37.047 --rc geninfo_all_blocks=1 00:05:37.047 --rc geninfo_unexecuted_blocks=1 00:05:37.047 00:05:37.047 ' 00:05:37.047 20:26:37 version -- app/version.sh@17 -- # get_header_version major 00:05:37.047 20:26:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # cut -f2 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.047 20:26:37 version -- app/version.sh@17 -- # major=25 00:05:37.047 20:26:37 version -- app/version.sh@18 -- # get_header_version minor 00:05:37.047 20:26:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # cut -f2 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.047 20:26:37 version -- app/version.sh@18 -- # minor=1 00:05:37.047 20:26:37 version -- app/version.sh@19 -- # get_header_version patch 00:05:37.047 20:26:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # cut -f2 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.047 20:26:37 version -- app/version.sh@19 -- # patch=0 00:05:37.047 20:26:37 version -- app/version.sh@20 -- # get_header_version suffix 00:05:37.047 20:26:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.047 20:26:37 version -- app/version.sh@14 -- # cut -f2 00:05:37.047 20:26:37 version -- app/version.sh@20 -- # suffix=-pre 00:05:37.047 20:26:37 version -- app/version.sh@22 -- # version=25.1 00:05:37.047 20:26:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:37.047 20:26:37 version -- app/version.sh@28 -- # version=25.1rc0 00:05:37.047 20:26:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:37.047 20:26:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:37.047 20:26:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:37.047 20:26:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:37.047 00:05:37.047 real 0m0.268s 00:05:37.047 user 0m0.164s 00:05:37.047 sys 0m0.137s 00:05:37.047 ************************************ 00:05:37.047 END TEST version 00:05:37.047 ************************************ 00:05:37.047 20:26:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.047 20:26:37 version -- common/autotest_common.sh@10 -- # set +x 00:05:37.047 20:26:37 -- 
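The get_header_version helper traced above reduces to a grep/cut/tr pipeline over include/spdk/version.h: grab the matching #define line, take the tab-separated value field, and strip the quotes that only SPDK_VERSION_SUFFIX carries. A self-contained sketch against a throwaway header, assuming the same tab-separated layout that cut -f2 (with its default delimiter) relies on:

    printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > /tmp/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /tmp/version.h | cut -f2 | tr -d '"'   # -> 25
    grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /tmp/version.h | cut -f2 | tr -d '"'  # -> -pre

The major/minor/patch/suffix values are then stitched into 25.1, which a -pre suffix turns into 25.1rc0, and compared against what python3 reports from the spdk module in the @30/@31 steps above.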
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:37.047 20:26:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:37.047 20:26:37 -- spdk/autotest.sh@194 -- # uname -s 00:05:37.326 20:26:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:37.326 20:26:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.326 20:26:37 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:37.326 20:26:37 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:37.326 20:26:37 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:37.326 20:26:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.327 20:26:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.327 20:26:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.327 ************************************ 00:05:37.327 START TEST spdk_dd 00:05:37.327 ************************************ 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:37.327 * Looking for test storage... 00:05:37.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.327 --rc genhtml_branch_coverage=1 00:05:37.327 --rc genhtml_function_coverage=1 00:05:37.327 --rc genhtml_legend=1 00:05:37.327 --rc geninfo_all_blocks=1 00:05:37.327 --rc geninfo_unexecuted_blocks=1 00:05:37.327 00:05:37.327 ' 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.327 --rc genhtml_branch_coverage=1 00:05:37.327 --rc genhtml_function_coverage=1 00:05:37.327 --rc genhtml_legend=1 00:05:37.327 --rc geninfo_all_blocks=1 00:05:37.327 --rc geninfo_unexecuted_blocks=1 00:05:37.327 00:05:37.327 ' 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.327 --rc genhtml_branch_coverage=1 00:05:37.327 --rc genhtml_function_coverage=1 00:05:37.327 --rc genhtml_legend=1 00:05:37.327 --rc geninfo_all_blocks=1 00:05:37.327 --rc geninfo_unexecuted_blocks=1 00:05:37.327 00:05:37.327 ' 00:05:37.327 20:26:37 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.327 --rc genhtml_branch_coverage=1 00:05:37.327 --rc genhtml_function_coverage=1 00:05:37.327 --rc genhtml_legend=1 00:05:37.327 --rc geninfo_all_blocks=1 00:05:37.327 --rc geninfo_unexecuted_blocks=1 00:05:37.327 00:05:37.327 ' 00:05:37.327 20:26:37 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.327 20:26:37 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.327 20:26:37 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.327 20:26:37 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.327 20:26:37 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.327 20:26:37 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:37.327 20:26:37 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.327 20:26:37 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.844 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.844 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.844 20:26:37 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:37.844 20:26:38 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:37.844 20:26:38 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:37.844 20:26:38 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:37.845 20:26:38 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:37.845 20:26:38 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.845 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:37.846 * spdk_dd linked to liburing 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:37.846 20:26:38 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:37.846 20:26:38 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:37.846 20:26:38 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:37.846 20:26:38 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:37.846 20:26:38 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:37.846 20:26:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.846 20:26:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.846 20:26:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:37.846 ************************************ 00:05:37.846 START TEST spdk_dd_basic_rw 00:05:37.846 ************************************ 00:05:37.846 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:37.846 * Looking for test storage... 00:05:37.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:37.846 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.846 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.846 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.104 --rc genhtml_branch_coverage=1 00:05:38.104 --rc genhtml_function_coverage=1 00:05:38.104 --rc genhtml_legend=1 00:05:38.104 --rc geninfo_all_blocks=1 00:05:38.104 --rc geninfo_unexecuted_blocks=1 00:05:38.104 00:05:38.104 ' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.104 --rc genhtml_branch_coverage=1 00:05:38.104 --rc genhtml_function_coverage=1 00:05:38.104 --rc genhtml_legend=1 00:05:38.104 --rc geninfo_all_blocks=1 00:05:38.104 --rc geninfo_unexecuted_blocks=1 00:05:38.104 00:05:38.104 ' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.104 --rc genhtml_branch_coverage=1 00:05:38.104 --rc genhtml_function_coverage=1 00:05:38.104 --rc genhtml_legend=1 00:05:38.104 --rc geninfo_all_blocks=1 00:05:38.104 --rc geninfo_unexecuted_blocks=1 00:05:38.104 00:05:38.104 ' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.104 --rc genhtml_branch_coverage=1 00:05:38.104 --rc genhtml_function_coverage=1 00:05:38.104 --rc genhtml_legend=1 00:05:38.104 --rc geninfo_all_blocks=1 00:05:38.104 --rc geninfo_unexecuted_blocks=1 00:05:38.104 00:05:38.104 ' 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
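The run of '[[ lib == liburing.so.* ]]' comparisons above is check_liburing from test/dd/common.sh walking the NEEDED entries that 'objdump -p | grep NEEDED' reports for the spdk_dd binary; once liburing.so.2 matches it prints '* spdk_dd linked to liburing', cross-checks build_config.sh, and exports liburing_in_use=1, so the guard at dd/dd.sh line 15, (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )), evaluates false. A minimal sketch of that dependency scan follows; the function name and the usage line are illustrative, not part of the test suite:

    check_linked_liburing() {          # illustrative stand-in for the suite's check_liburing
        local bin=$1 lib
        # objdump -p prints one "NEEDED <library>" line per shared-library dependency
        while read -r _ lib _; do
            [[ $lib == liburing.so.* ]] && { printf '%s linked to liburing\n' "$bin"; return 0; }
        done < <(objdump -p "$bin" | grep NEEDED)
        return 1
    }
    # e.g.: check_linked_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd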
00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:38.104 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:38.363 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:38.363 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:38.364 ************************************ 00:05:38.364 START TEST dd_bs_lt_native_bs 00:05:38.364 ************************************ 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:38.364 20:26:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.364 [2024-11-26 20:26:38.609845] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:38.364 [2024-11-26 20:26:38.610020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59646 ] 00:05:38.364 { 00:05:38.364 "subsystems": [ 00:05:38.364 { 00:05:38.364 "subsystem": "bdev", 00:05:38.364 "config": [ 00:05:38.364 { 00:05:38.364 "params": { 00:05:38.364 "trtype": "pcie", 00:05:38.364 "traddr": "0000:00:10.0", 00:05:38.364 "name": "Nvme0" 00:05:38.364 }, 00:05:38.364 "method": "bdev_nvme_attach_controller" 00:05:38.364 }, 00:05:38.364 { 00:05:38.364 "method": "bdev_wait_for_examine" 00:05:38.364 } 00:05:38.364 ] 00:05:38.364 } 00:05:38.364 ] 00:05:38.364 } 00:05:38.621 [2024-11-26 20:26:38.767398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.621 [2024-11-26 20:26:38.834637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.621 [2024-11-26 20:26:38.892195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.879 [2024-11-26 20:26:39.007453] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:38.879 [2024-11-26 20:26:39.007527] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.879 [2024-11-26 20:26:39.137811] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.879 00:05:38.879 real 0m0.653s 00:05:38.879 user 0m0.465s 00:05:38.879 sys 0m0.160s 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.879 
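Editor's note: the *ERROR* line above is the expected outcome, dd_bs_lt_native_bs only passes when spdk_dd refuses a --bs smaller than the bdev's native block size. A minimal bash paraphrase of the check follows (this is not the literal dd/common.sh / basic_rw.sh code; $identify_output, $conf and the dd.dump0 input path are stand-ins for the identify dump, the JSON config and the /dev/fd descriptors the real test uses):

    # Scrape the data size of the current LBA format (#04 in this run) from the
    # identify dump, using the same regex that appears in the trace above.
    re='LBA Format #04: Data Size: *([0-9]+)'
    if [[ $identify_output =~ $re ]]; then
        native_bs=${BASH_REMATCH[1]}    # 4096 in this run
    fi

    # Negative check: spdk_dd must fail when --bs is below the native block size.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
           --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --bs=2048 --json "$conf"; then
        echo "spdk_dd accepted --bs=2048 < $native_bs; test should fail" >&2
        exit 1
    fi

The trace wraps the same idea in the autotest NOT helper instead of inverting the exit status by hand, which is where the es=234 / es=1 bookkeeping above comes from.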
************************************ 00:05:38.879 END TEST dd_bs_lt_native_bs 00:05:38.879 ************************************ 00:05:38.879 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.136 ************************************ 00:05:39.136 START TEST dd_rw 00:05:39.136 ************************************ 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:39.136 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.702 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:39.702 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:39.702 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:39.702 20:26:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.702 [2024-11-26 20:26:39.891941] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
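Editor's note: dd_rw then drives basic_rw 4096 through every block size / queue depth combination. The repeated bss+=($((native_bs << bs))) lines in the trace expand to three block sizes, each paired with queue depths 1 and 64. A sketch of that setup (names adjusted slightly from the trace; the loop body is a placeholder for the cycle shown further below):

    native_bs=4096
    qds=(1 64)
    bss=()
    for shift_by in 0 1 2; do
        bss+=( $((native_bs << shift_by)) )   # 4096 8192 16384
    done

    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            : # write, read back, diff and wipe at this bs/qd (see the cycle sketch below)
        done
    done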
00:05:39.702 [2024-11-26 20:26:39.892025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:05:39.702 { 00:05:39.702 "subsystems": [ 00:05:39.702 { 00:05:39.702 "subsystem": "bdev", 00:05:39.702 "config": [ 00:05:39.702 { 00:05:39.702 "params": { 00:05:39.702 "trtype": "pcie", 00:05:39.702 "traddr": "0000:00:10.0", 00:05:39.702 "name": "Nvme0" 00:05:39.702 }, 00:05:39.702 "method": "bdev_nvme_attach_controller" 00:05:39.702 }, 00:05:39.702 { 00:05:39.702 "method": "bdev_wait_for_examine" 00:05:39.702 } 00:05:39.702 ] 00:05:39.702 } 00:05:39.702 ] 00:05:39.702 } 00:05:39.702 [2024-11-26 20:26:40.035499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.960 [2024-11-26 20:26:40.094985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.960 [2024-11-26 20:26:40.148503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.960  [2024-11-26T20:26:40.573Z] Copying: 60/60 [kB] (average 29 MBps) 00:05:40.218 00:05:40.218 20:26:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:40.218 20:26:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:40.218 20:26:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.218 20:26:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.218 [2024-11-26 20:26:40.501898] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
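Editor's note: every spdk_dd invocation in this test receives the same two-entry bdev configuration over --json /dev/fd/62; the gen_conf calls in the trace emit the JSON blocks interleaved with the log output above. Written out to a regular file (a hypothetical path, the test itself passes it through a file descriptor), it would be:

    cat > /tmp/dd_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

This attaches the emulated NVMe controller at PCIe address 0000:00:10.0 as bdev Nvme0n1 and waits for examination before the copy starts.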
00:05:40.218 [2024-11-26 20:26:40.501993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:05:40.218 { 00:05:40.218 "subsystems": [ 00:05:40.218 { 00:05:40.218 "subsystem": "bdev", 00:05:40.218 "config": [ 00:05:40.218 { 00:05:40.218 "params": { 00:05:40.218 "trtype": "pcie", 00:05:40.218 "traddr": "0000:00:10.0", 00:05:40.218 "name": "Nvme0" 00:05:40.218 }, 00:05:40.218 "method": "bdev_nvme_attach_controller" 00:05:40.218 }, 00:05:40.218 { 00:05:40.218 "method": "bdev_wait_for_examine" 00:05:40.218 } 00:05:40.218 ] 00:05:40.218 } 00:05:40.218 ] 00:05:40.218 } 00:05:40.476 [2024-11-26 20:26:40.647377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.476 [2024-11-26 20:26:40.709175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.476 [2024-11-26 20:26:40.763948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.743  [2024-11-26T20:26:41.098Z] Copying: 60/60 [kB] (average 14 MBps) 00:05:40.743 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.743 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.022 [2024-11-26 20:26:41.125846] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:41.022 [2024-11-26 20:26:41.125958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59706 ] 00:05:41.022 { 00:05:41.022 "subsystems": [ 00:05:41.022 { 00:05:41.022 "subsystem": "bdev", 00:05:41.022 "config": [ 00:05:41.022 { 00:05:41.022 "params": { 00:05:41.022 "trtype": "pcie", 00:05:41.022 "traddr": "0000:00:10.0", 00:05:41.022 "name": "Nvme0" 00:05:41.022 }, 00:05:41.022 "method": "bdev_nvme_attach_controller" 00:05:41.022 }, 00:05:41.022 { 00:05:41.022 "method": "bdev_wait_for_examine" 00:05:41.022 } 00:05:41.022 ] 00:05:41.022 } 00:05:41.022 ] 00:05:41.022 } 00:05:41.022 [2024-11-26 20:26:41.268259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.022 [2024-11-26 20:26:41.350149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.282 [2024-11-26 20:26:41.408342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.282  [2024-11-26T20:26:41.896Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:41.541 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:41.541 20:26:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.107 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:42.107 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:42.107 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.107 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.107 [2024-11-26 20:26:42.349850] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
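Editor's note: the four commands that just completed form the verification cycle repeated for every bs/qd pair in the rest of this test: write random data, read the same blocks back, compare the dumps, then wipe the start of the namespace. Paraphrased with $SPDK_DD, $TESTDIR and $conf as stand-ins for the full /home/vagrant/spdk_repo paths and the JSON config above:

    "$SPDK_DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$conf"              # write 15 blocks of random data
    "$SPDK_DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --bs=4096 --qd=1 --count=15 --json "$conf"   # read the same 15 blocks back
    diff -q "$TESTDIR/dd.dump0" "$TESTDIR/dd.dump1"                                               # dumps must be identical
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"                  # clear_nvme: overwrite the first 1 MiB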
00:05:42.107 [2024-11-26 20:26:42.349945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:05:42.107 { 00:05:42.107 "subsystems": [ 00:05:42.107 { 00:05:42.107 "subsystem": "bdev", 00:05:42.107 "config": [ 00:05:42.107 { 00:05:42.107 "params": { 00:05:42.107 "trtype": "pcie", 00:05:42.107 "traddr": "0000:00:10.0", 00:05:42.107 "name": "Nvme0" 00:05:42.107 }, 00:05:42.107 "method": "bdev_nvme_attach_controller" 00:05:42.107 }, 00:05:42.107 { 00:05:42.107 "method": "bdev_wait_for_examine" 00:05:42.107 } 00:05:42.107 ] 00:05:42.107 } 00:05:42.107 ] 00:05:42.107 } 00:05:42.366 [2024-11-26 20:26:42.493465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.366 [2024-11-26 20:26:42.548830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.366 [2024-11-26 20:26:42.605947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.624  [2024-11-26T20:26:42.979Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:42.624 00:05:42.624 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:42.624 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:42.624 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.624 20:26:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.882 [2024-11-26 20:26:42.984182] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:42.882 [2024-11-26 20:26:42.984288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:05:42.882 { 00:05:42.882 "subsystems": [ 00:05:42.882 { 00:05:42.883 "subsystem": "bdev", 00:05:42.883 "config": [ 00:05:42.883 { 00:05:42.883 "params": { 00:05:42.883 "trtype": "pcie", 00:05:42.883 "traddr": "0000:00:10.0", 00:05:42.883 "name": "Nvme0" 00:05:42.883 }, 00:05:42.883 "method": "bdev_nvme_attach_controller" 00:05:42.883 }, 00:05:42.883 { 00:05:42.883 "method": "bdev_wait_for_examine" 00:05:42.883 } 00:05:42.883 ] 00:05:42.883 } 00:05:42.883 ] 00:05:42.883 } 00:05:42.883 [2024-11-26 20:26:43.131967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.883 [2024-11-26 20:26:43.186969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.141 [2024-11-26 20:26:43.243969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.141  [2024-11-26T20:26:43.755Z] Copying: 60/60 [kB] (average 29 MBps) 00:05:43.400 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.400 20:26:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.400 [2024-11-26 20:26:43.606946] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:43.400 [2024-11-26 20:26:43.607048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:05:43.400 { 00:05:43.400 "subsystems": [ 00:05:43.400 { 00:05:43.400 "subsystem": "bdev", 00:05:43.400 "config": [ 00:05:43.400 { 00:05:43.400 "params": { 00:05:43.400 "trtype": "pcie", 00:05:43.400 "traddr": "0000:00:10.0", 00:05:43.400 "name": "Nvme0" 00:05:43.400 }, 00:05:43.400 "method": "bdev_nvme_attach_controller" 00:05:43.400 }, 00:05:43.400 { 00:05:43.400 "method": "bdev_wait_for_examine" 00:05:43.400 } 00:05:43.400 ] 00:05:43.400 } 00:05:43.400 ] 00:05:43.400 } 00:05:43.658 [2024-11-26 20:26:43.756046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.658 [2024-11-26 20:26:43.817396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.658 [2024-11-26 20:26:43.874096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.658  [2024-11-26T20:26:44.271Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:43.916 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:43.916 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.482 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:44.482 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:44.482 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.482 20:26:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.482 { 00:05:44.482 "subsystems": [ 00:05:44.482 { 00:05:44.482 "subsystem": "bdev", 00:05:44.482 "config": [ 00:05:44.482 { 00:05:44.482 "params": { 00:05:44.482 "trtype": "pcie", 00:05:44.482 "traddr": "0000:00:10.0", 00:05:44.482 "name": "Nvme0" 00:05:44.482 }, 00:05:44.482 "method": "bdev_nvme_attach_controller" 00:05:44.482 }, 00:05:44.482 { 00:05:44.482 "method": "bdev_wait_for_examine" 00:05:44.482 } 00:05:44.482 ] 00:05:44.482 } 00:05:44.482 ] 00:05:44.482 } 00:05:44.482 [2024-11-26 20:26:44.788708] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
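Editor's note: from here the same cycle repeats at bs=8192 and then bs=16384, with a smaller count as the block size grows. The transfer sizes seen in this run work out as below; whether the counts are chosen to stay under a fixed ceiling or are simply this run's picks is not visible in the log.

    echo $(( 15 * 4096  ))   # 61440 bytes  (bs=4096,  count=15)
    echo $((  7 * 8192  ))   # 57344 bytes  (bs=8192,  count=7)
    echo $((  3 * 16384 ))   # 49152 bytes  (bs=16384, count=3)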
00:05:44.482 [2024-11-26 20:26:44.788866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:05:44.782 [2024-11-26 20:26:44.938037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.782 [2024-11-26 20:26:44.993749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.782 [2024-11-26 20:26:45.050911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.059  [2024-11-26T20:26:45.414Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:45.059 00:05:45.059 20:26:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:45.059 20:26:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:45.059 20:26:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.059 20:26:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.317 { 00:05:45.317 "subsystems": [ 00:05:45.317 { 00:05:45.317 "subsystem": "bdev", 00:05:45.317 "config": [ 00:05:45.317 { 00:05:45.317 "params": { 00:05:45.317 "trtype": "pcie", 00:05:45.317 "traddr": "0000:00:10.0", 00:05:45.317 "name": "Nvme0" 00:05:45.317 }, 00:05:45.317 "method": "bdev_nvme_attach_controller" 00:05:45.317 }, 00:05:45.317 { 00:05:45.317 "method": "bdev_wait_for_examine" 00:05:45.317 } 00:05:45.317 ] 00:05:45.317 } 00:05:45.317 ] 00:05:45.317 } 00:05:45.317 [2024-11-26 20:26:45.431667] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:45.317 [2024-11-26 20:26:45.431830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59792 ] 00:05:45.317 [2024-11-26 20:26:45.580528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.317 [2024-11-26 20:26:45.642933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.575 [2024-11-26 20:26:45.696810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.575  [2024-11-26T20:26:46.188Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:45.833 00:05:45.833 20:26:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.833 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.833 { 00:05:45.833 "subsystems": [ 00:05:45.833 { 00:05:45.833 "subsystem": "bdev", 00:05:45.833 "config": [ 00:05:45.833 { 00:05:45.833 "params": { 00:05:45.833 "trtype": "pcie", 00:05:45.833 "traddr": "0000:00:10.0", 00:05:45.833 "name": "Nvme0" 00:05:45.833 }, 00:05:45.833 "method": "bdev_nvme_attach_controller" 00:05:45.833 }, 00:05:45.833 { 00:05:45.833 "method": "bdev_wait_for_examine" 00:05:45.833 } 00:05:45.833 ] 00:05:45.833 } 00:05:45.833 ] 00:05:45.833 } 00:05:45.833 [2024-11-26 20:26:46.069662] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:45.833 [2024-11-26 20:26:46.069831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:05:46.091 [2024-11-26 20:26:46.228114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.091 [2024-11-26 20:26:46.294042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.091 [2024-11-26 20:26:46.352326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.349  [2024-11-26T20:26:46.704Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:46.349 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:46.349 20:26:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:46.915 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.915 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.915 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 [2024-11-26 20:26:47.254340] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:46.915 [2024-11-26 20:26:47.254441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:05:46.915 { 00:05:46.915 "subsystems": [ 00:05:46.915 { 00:05:46.915 "subsystem": "bdev", 00:05:46.915 "config": [ 00:05:46.915 { 00:05:46.915 "params": { 00:05:46.915 "trtype": "pcie", 00:05:46.915 "traddr": "0000:00:10.0", 00:05:46.915 "name": "Nvme0" 00:05:46.915 }, 00:05:46.915 "method": "bdev_nvme_attach_controller" 00:05:46.915 }, 00:05:46.915 { 00:05:46.915 "method": "bdev_wait_for_examine" 00:05:46.915 } 00:05:46.915 ] 00:05:46.915 } 00:05:46.915 ] 00:05:46.915 } 00:05:47.172 [2024-11-26 20:26:47.395925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.172 [2024-11-26 20:26:47.475216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.431 [2024-11-26 20:26:47.534424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.431  [2024-11-26T20:26:48.044Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:47.689 00:05:47.689 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:47.689 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.689 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.689 20:26:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.689 { 00:05:47.689 "subsystems": [ 00:05:47.689 { 00:05:47.689 "subsystem": "bdev", 00:05:47.689 "config": [ 00:05:47.689 { 00:05:47.689 "params": { 00:05:47.689 "trtype": "pcie", 00:05:47.689 "traddr": "0000:00:10.0", 00:05:47.689 "name": "Nvme0" 00:05:47.689 }, 00:05:47.689 "method": "bdev_nvme_attach_controller" 00:05:47.689 }, 00:05:47.689 { 00:05:47.689 "method": "bdev_wait_for_examine" 00:05:47.689 } 00:05:47.689 ] 00:05:47.689 } 00:05:47.689 ] 00:05:47.689 } 00:05:47.689 [2024-11-26 20:26:47.902737] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:47.689 [2024-11-26 20:26:47.902829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59851 ] 00:05:47.947 [2024-11-26 20:26:48.049340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.947 [2024-11-26 20:26:48.111751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.947 [2024-11-26 20:26:48.165445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.947  [2024-11-26T20:26:48.560Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:48.205 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.205 20:26:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.205 [2024-11-26 20:26:48.531936] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:48.205 [2024-11-26 20:26:48.532051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:05:48.205 { 00:05:48.205 "subsystems": [ 00:05:48.205 { 00:05:48.205 "subsystem": "bdev", 00:05:48.205 "config": [ 00:05:48.205 { 00:05:48.205 "params": { 00:05:48.205 "trtype": "pcie", 00:05:48.205 "traddr": "0000:00:10.0", 00:05:48.205 "name": "Nvme0" 00:05:48.205 }, 00:05:48.205 "method": "bdev_nvme_attach_controller" 00:05:48.205 }, 00:05:48.205 { 00:05:48.205 "method": "bdev_wait_for_examine" 00:05:48.205 } 00:05:48.205 ] 00:05:48.205 } 00:05:48.205 ] 00:05:48.205 } 00:05:48.464 [2024-11-26 20:26:48.682509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.464 [2024-11-26 20:26:48.753436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.464 [2024-11-26 20:26:48.810465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.721  [2024-11-26T20:26:49.335Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:48.980 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:48.980 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.239 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:49.239 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:49.239 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.239 20:26:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.499 [2024-11-26 20:26:49.628880] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
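Editor's note: each pass regenerates its input file with gen_bytes <size> before writing it out. The helper's implementation is not shown in this log; a hypothetical stand-in that produces the same kind of lowercase-alphanumeric payload visible in the dd_rw_offset data further below might look like this (the real dd/common.sh helper may differ):

    gen_bytes() {
        # Emit $1 random characters drawn from [a-z0-9] on stdout.
        local n=$1
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$n"
    }

    gen_bytes 49152 > "$TESTDIR/dd.dump0"   # the bs=16384 pass uses a 49152-byte input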
00:05:49.499 [2024-11-26 20:26:49.628964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59880 ] 00:05:49.499 { 00:05:49.499 "subsystems": [ 00:05:49.499 { 00:05:49.499 "subsystem": "bdev", 00:05:49.499 "config": [ 00:05:49.499 { 00:05:49.499 "params": { 00:05:49.499 "trtype": "pcie", 00:05:49.499 "traddr": "0000:00:10.0", 00:05:49.499 "name": "Nvme0" 00:05:49.499 }, 00:05:49.499 "method": "bdev_nvme_attach_controller" 00:05:49.499 }, 00:05:49.499 { 00:05:49.499 "method": "bdev_wait_for_examine" 00:05:49.499 } 00:05:49.499 ] 00:05:49.499 } 00:05:49.499 ] 00:05:49.499 } 00:05:49.499 [2024-11-26 20:26:49.771903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.499 [2024-11-26 20:26:49.844083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.758 [2024-11-26 20:26:49.904607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.758  [2024-11-26T20:26:50.371Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:50.016 00:05:50.016 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:50.016 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.016 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.016 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.016 [2024-11-26 20:26:50.256885] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:50.016 [2024-11-26 20:26:50.256977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:05:50.016 { 00:05:50.016 "subsystems": [ 00:05:50.016 { 00:05:50.016 "subsystem": "bdev", 00:05:50.016 "config": [ 00:05:50.016 { 00:05:50.016 "params": { 00:05:50.016 "trtype": "pcie", 00:05:50.016 "traddr": "0000:00:10.0", 00:05:50.016 "name": "Nvme0" 00:05:50.016 }, 00:05:50.016 "method": "bdev_nvme_attach_controller" 00:05:50.016 }, 00:05:50.016 { 00:05:50.016 "method": "bdev_wait_for_examine" 00:05:50.016 } 00:05:50.016 ] 00:05:50.016 } 00:05:50.016 ] 00:05:50.016 } 00:05:50.274 [2024-11-26 20:26:50.398477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.274 [2024-11-26 20:26:50.461726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.274 [2024-11-26 20:26:50.515921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.554  [2024-11-26T20:26:50.909Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:50.554 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.554 20:26:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.554 [2024-11-26 20:26:50.877684] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:50.554 [2024-11-26 20:26:50.877781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:05:50.554 { 00:05:50.554 "subsystems": [ 00:05:50.554 { 00:05:50.554 "subsystem": "bdev", 00:05:50.554 "config": [ 00:05:50.554 { 00:05:50.554 "params": { 00:05:50.554 "trtype": "pcie", 00:05:50.554 "traddr": "0000:00:10.0", 00:05:50.554 "name": "Nvme0" 00:05:50.554 }, 00:05:50.554 "method": "bdev_nvme_attach_controller" 00:05:50.554 }, 00:05:50.554 { 00:05:50.554 "method": "bdev_wait_for_examine" 00:05:50.554 } 00:05:50.554 ] 00:05:50.554 } 00:05:50.554 ] 00:05:50.554 } 00:05:50.814 [2024-11-26 20:26:51.023952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.814 [2024-11-26 20:26:51.093421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.814 [2024-11-26 20:26:51.151154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.073  [2024-11-26T20:26:51.689Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:51.334 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:51.334 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.593 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:51.593 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:51.593 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.593 20:26:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.851 { 00:05:51.851 "subsystems": [ 00:05:51.851 { 00:05:51.851 "subsystem": "bdev", 00:05:51.851 "config": [ 00:05:51.851 { 00:05:51.851 "params": { 00:05:51.851 "trtype": "pcie", 00:05:51.851 "traddr": "0000:00:10.0", 00:05:51.851 "name": "Nvme0" 00:05:51.851 }, 00:05:51.851 "method": "bdev_nvme_attach_controller" 00:05:51.851 }, 00:05:51.851 { 00:05:51.851 "method": "bdev_wait_for_examine" 00:05:51.851 } 00:05:51.851 ] 00:05:51.851 } 00:05:51.851 ] 00:05:51.851 } 00:05:51.851 [2024-11-26 20:26:51.979686] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:51.851 [2024-11-26 20:26:51.979802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:05:51.851 [2024-11-26 20:26:52.129684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.851 [2024-11-26 20:26:52.203421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.127 [2024-11-26 20:26:52.260689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.127  [2024-11-26T20:26:52.739Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:52.384 00:05:52.384 20:26:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:52.384 20:26:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:52.384 20:26:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.385 20:26:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.385 [2024-11-26 20:26:52.638268] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:52.385 [2024-11-26 20:26:52.638399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59949 ] 00:05:52.385 { 00:05:52.385 "subsystems": [ 00:05:52.385 { 00:05:52.385 "subsystem": "bdev", 00:05:52.385 "config": [ 00:05:52.385 { 00:05:52.385 "params": { 00:05:52.385 "trtype": "pcie", 00:05:52.385 "traddr": "0000:00:10.0", 00:05:52.385 "name": "Nvme0" 00:05:52.385 }, 00:05:52.385 "method": "bdev_nvme_attach_controller" 00:05:52.385 }, 00:05:52.385 { 00:05:52.385 "method": "bdev_wait_for_examine" 00:05:52.385 } 00:05:52.385 ] 00:05:52.385 } 00:05:52.385 ] 00:05:52.385 } 00:05:52.642 [2024-11-26 20:26:52.786378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.642 [2024-11-26 20:26:52.855788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.642 [2024-11-26 20:26:52.914843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.900  [2024-11-26T20:26:53.255Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:52.900 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.900 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.157 [2024-11-26 20:26:53.304429] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:53.157 [2024-11-26 20:26:53.304573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59970 ] 00:05:53.157 { 00:05:53.157 "subsystems": [ 00:05:53.157 { 00:05:53.157 "subsystem": "bdev", 00:05:53.157 "config": [ 00:05:53.157 { 00:05:53.157 "params": { 00:05:53.157 "trtype": "pcie", 00:05:53.157 "traddr": "0000:00:10.0", 00:05:53.157 "name": "Nvme0" 00:05:53.157 }, 00:05:53.157 "method": "bdev_nvme_attach_controller" 00:05:53.157 }, 00:05:53.157 { 00:05:53.157 "method": "bdev_wait_for_examine" 00:05:53.157 } 00:05:53.157 ] 00:05:53.157 } 00:05:53.157 ] 00:05:53.157 } 00:05:53.157 [2024-11-26 20:26:53.470942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.414 [2024-11-26 20:26:53.544000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.414 [2024-11-26 20:26:53.598963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.414  [2024-11-26T20:26:54.028Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:53.673 00:05:53.673 00:05:53.673 real 0m14.673s 00:05:53.673 user 0m10.753s 00:05:53.673 sys 0m5.512s 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.673 ************************************ 00:05:53.673 END TEST dd_rw 00:05:53.673 ************************************ 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.673 ************************************ 00:05:53.673 START TEST dd_rw_offset 00:05:53.673 ************************************ 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:53.673 20:26:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:53.673 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:53.673 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=zao2j98cq4bw2xss0cf7z3pj20o9x3qb8t7hew9bdz27y1wuubz0x4pencj1wxkggkkkvknegjvavctio827um8hdaa1hkseeqfa4nn1qk1ki2a8eh2c0cme0dvo53yhx5j810mvyzav1hy2hstvr9uio6s7nwl975zxijt1i7fnr43wzg0ma2r5938968n7dytrb4r56dn7jl3zjp6f29j4vpk3ixltz0zn786dh421epo4k59gp9nh9qf3mf8rt7wbq3c3xj08jp5olredyymqmc41jbhi4eh3ypofg27nysohwinmbzhdauxt8htdnuutdhtl3ag4a5sxq5y8824epszchefvq3i3zg2jxvswvp3w42rqlhrgjgalpingigu3n69j5dexbfhofel7f1h096noly9cuj2aq5xlqs5w6q8dyxj9gs67bdawxgvkryhxwi43l7zs0ulq0db2oqi0vh1nip4hjuzbdkyjuw77m0srqlf8g7hj9sfewca2znyh733fxff7ja79o20joa47dq6fehsr5xwdl0lfp9b0wxuzir6mi74uai60dl0dxi8o0f8z2g782bhnw6u2cu6k9tmknl4ah85zpakdrjjdp56l5iwoh7iov3j201xmyv8xkv2tsfuy6xgugnd25lf4z4ium45wostkl2hrpraxkh3c8f4w6brd4tcryqxw4a6fmz7s6b4wdgmqvqsvbzdb8eqa0tyaaed1s7j6rnmxh775ex7kj17cl2mb9vrjwq01tnx8f63hvdcb3m1sy6ruxyhh0chzo2b76yzt0ceeix5vfq738j1ocsxx947gh9xn1usocemtwl0laeqmuh10h9r11ih2k3vs1gpxnidcgrslphz28s7jjr70vq8qns59n9c65r2drivyrlgwnidjp32460hg0d9hcoc5ot4g8t8i8trarkxl4004xsk3afg0hvmo5b8xr8vdwrjheeur9u5186tm77mf5ioukwczopmv43er27cgwnzkb74ucdlkr1p4cbmjna6pq7b0z69qbw5uhmni4p1x93c1aw3pf6w3ibq2vpzxuodjn7b2m3dfkxfcip50zpcz494xy56gd96v460wb6ojfb8dih20w0rn1d3mvff584sce8l37yq4u5cwvlpkf8c56ue3slcqs9y7xthrtmyk7udh77dd9xpgl29pzz19x5u4jesfia59dh1t871mtv1ym66423a0638jgcyh1udolfevjgz9620okxwfnyldls8sr3us59vd0rgn2144yef7lc8by3hnxuunpvvhkxtusxb79jndniaa2b1rdxfw64pye2tpdgfgg7i88s8b4afza49tfxcvyrrsjwqs0q2chf05ucsf2h9y5i62d4gl5sqq0kwr5246vcxxv8rie02dkr6gpsi16hwlzutparroau1f5rbaac1eoqwmg5bmn0tpsvixq1x4ey35x4lo7s1qe5zbkjcz4thfi8n37rhlcv1hw00bd4igqm95viqpu2y2slcbbuigvn6tvcyke5z3o1a67f13ujr6faocpv2n95efq9htzy0jh6p8yx5gzhhwduf5vw40l6c4rz35glgxqb83owdls7eyn8l20qe7l5yrw96l987q3286dawhq7xx5hmcv5s883epedk8skm8mf3patgbuxqbe6xjdkdhidjfrdwgekqoeifbuhdug2lp7a3yfxbcii0vcll7ypkrvd3ec15x9ysstpsi9f9rm6aaiwljxq4i8uoo75vldp01v2o3v42cq3st1xyab13s0e9ho5mqc70gmhgfycug34cqun8vsnc390erz8u0adbl231xeoq8f2mrjuniqpur3jxm634hf3jf8iar5ziq2luz1w92iu4g7whyzl8t0f7i0h12xup8324ev8bhsu4ku8ldybc2goq5k2tqizgkifcz72vdni7ng94n0isrphq9bcn60rioh25l9rmatutmp51p36kvi5ipi2kyvklextk0hpub4o47xwyq0y3jap53s3opsj1vjvpyhjuuo9634lakb5r3t9orbx8zr9uopfdmuej2dqj3o6z9tum3lubsjun91n2r8xgonhsaukyinph364q7k7ae0ilcjjdwbxfpczi3jhn5onyv2lar9bckrv8698kfp7bro4psab6zng774v2dyhis562mfv3ync5ns8ckwx3080j760w2p2gaqa3hp1g9ybevzg3obxk1zb6dnt1lske5p4jhyc7xdt8vc5i2q7h34l0n6fv0d37ue1amrs9btpn308mkuz5q45g8uj5novzhoskp25iz4fe562bp31hdw8fbghavk349n3vdgr2zfdv0qlg5z96oimwcsvlpbhl0l4e2pgls78yz7mq0ndtmjxzcvcyt76c4lwafd1pqrr9tvt9rk646ab84p1yg6whwmg078bvqwnf8c7mc35e2pa3sr14wkzycy42g1agkcz83pdpixpiu9w9pqq6cgl6mo2p70ts19zccnkwn4fgtjwowu71ro51u0po6uc54ryjd9sfwk4p5d70a6lbg5fp1hzwnng2y91uwobb5zdgpcwlv7cnen7rz9wr61oz7jkq5ou0y0omt5imvylhpi217yb50t04ufziuinjxpj3b8xgm59zhjjcee8whgdk22h2wl0o9lxi13o3o5fs2o1uzmb6h5mc341rsyyv1kr8t88g7vpuf14ii4c5cynrrj8im4gofcwb5x9vzhyv0tdex5aotc7oq3pdq6nuewgora3uj4ubhyqy4jpch04f4gc4t3eeuouj6e1tjc69r54i0fps93ah59xivvdt0hhlc8klnd56kbpsh8r05k3dheu6sqmtg9hd2yiptc59ckn26qb560rv5i01d87ufdkxf3mxvat00cplybx8ubge46slwh5yomz7wodu89saa5yntrtt29mvtij15knokminshb40cyyyia8boninro417suy74tne4krk6kgjkyyetm0v4gmlx6wixna47dy1dv8qas5wptu6odke2xpa1n5o458tg67f6uwwvtzki2f7senigy2iv0ufp1yel7w00aurchonqors3l4qjva3jpw88jw1o189ub5pqdxezogk6y12tj5x24bhk72p7kuxed1191mtt4ur83hzss8mhmiwkysj8c827w7x2dspe905yrsip8mwncot32e0wwr1jog5z1zfzmjuqok5nhb827jp580iixaixe5k82fb136g6a048xq24ye1obsv228ztd5noxbjb3bakbnlc53abk0ozvpjnaz4mkjmzavi7nmf0v6v6aieh53xvax8q2ocn0clcc3syigt8rg0dzrmfxzubp0uzx23dpgo2r0kz5x5vd4uicbc2jpvtmmmjvf8n03m2h8real827hhf19oa72pk9u5k9m6inqoxaegtqynb3esbbwldkalpvqgzh4axjtajgtyy694rc9amzxg6iib3jnlucepd3b4mkkhbuui0vqq4
0zbem4tzqijnz4jasayyzuhfmxdac880wfsfd47l5c6cdlpa5mtrjl2g762ntksjvyp6gruakpew5ep8481pc9rexd0mufbpt3aj3ubm7ta80kk1fzjuh9mx2j6p12nqiiqoco9v1zi04xmsdla8bty9p1ewvwxmsfzsxazsqxf7cp7gv6cxvmeh89uykevkx9595n0ry15l8f3cc3wwbgpll2tr378mrfxrcafftjl73xdyv3pd990n4r50748vqetzn1jxqdvjttww01umq751alkbkhmgyuj13jtytpvwqyt8c5uaoa6iyqdf9az6uzuhncq0dzt7bqgx7ix1pv6r5sv751goul6wtedjtkkjd80h2r4bp5z9kkp04v2pwqlmdepp6a50v1ghegf1kbhbusp08fy5h9o724kldrwhhfsf8vyw06xu1vl1y2tmcu79xly4zs7c9iwoe0c1qkqycwxqm06z5zvh696dy8u8rtmzar26fqnpmufof40a10anttpgyau346rojzjtt357chc995btj9 00:05:53.673 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:53.673 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:53.673 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:53.673 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:53.932 [2024-11-26 20:26:54.069002] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:53.932 [2024-11-26 20:26:54.069108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60006 ] 00:05:53.932 { 00:05:53.932 "subsystems": [ 00:05:53.932 { 00:05:53.932 "subsystem": "bdev", 00:05:53.932 "config": [ 00:05:53.932 { 00:05:53.932 "params": { 00:05:53.932 "trtype": "pcie", 00:05:53.932 "traddr": "0000:00:10.0", 00:05:53.932 "name": "Nvme0" 00:05:53.932 }, 00:05:53.932 "method": "bdev_nvme_attach_controller" 00:05:53.932 }, 00:05:53.932 { 00:05:53.932 "method": "bdev_wait_for_examine" 00:05:53.932 } 00:05:53.932 ] 00:05:53.932 } 00:05:53.932 ] 00:05:53.932 } 00:05:53.932 [2024-11-26 20:26:54.216507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.932 [2024-11-26 20:26:54.279266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.190 [2024-11-26 20:26:54.334060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.190  [2024-11-26T20:26:54.802Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:54.447 00:05:54.447 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:54.448 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:54.448 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:54.448 20:26:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:54.448 { 00:05:54.448 "subsystems": [ 00:05:54.448 { 00:05:54.448 "subsystem": "bdev", 00:05:54.448 "config": [ 00:05:54.448 { 00:05:54.448 "params": { 00:05:54.448 "trtype": "pcie", 00:05:54.448 "traddr": "0000:00:10.0", 00:05:54.448 "name": "Nvme0" 00:05:54.448 }, 00:05:54.448 "method": "bdev_nvme_attach_controller" 00:05:54.448 }, 00:05:54.448 { 00:05:54.448 "method": "bdev_wait_for_examine" 00:05:54.448 } 00:05:54.448 ] 00:05:54.448 } 00:05:54.448 ] 00:05:54.448 } 00:05:54.448 [2024-11-26 20:26:54.725446] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
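Editor's note: dd_rw_offset checks that --seek and --skip address the same location: the 4096-character random string above is written one block past the start of the bdev, read back from the same offset, and (as the comparison further down shows) must match byte for byte. Paraphrased with the same stand-in variables as before; redirecting the read from dd.dump1 is an assumption about how data_check is filled:

    "$SPDK_DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json "$conf"            # write starting one output block in
    "$SPDK_DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --skip=1 --count=1 --json "$conf"  # read one block back from the same offset
    read -rn4096 data_check < "$TESTDIR/dd.dump1"                                       # first 4096 bytes of the read-back
    [[ "$data" == "$data_check" ]]                                                      # must equal the generated payload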
00:05:54.448 [2024-11-26 20:26:54.725543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60014 ] 00:05:54.706 [2024-11-26 20:26:54.889032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.706 [2024-11-26 20:26:54.960261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.706 [2024-11-26 20:26:55.017718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.964  [2024-11-26T20:26:55.578Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:55.223 00:05:55.223 20:26:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:55.223 ************************************ 00:05:55.223 END TEST dd_rw_offset 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ zao2j98cq4bw2xss0cf7z3pj20o9x3qb8t7hew9bdz27y1wuubz0x4pencj1wxkggkkkvknegjvavctio827um8hdaa1hkseeqfa4nn1qk1ki2a8eh2c0cme0dvo53yhx5j810mvyzav1hy2hstvr9uio6s7nwl975zxijt1i7fnr43wzg0ma2r5938968n7dytrb4r56dn7jl3zjp6f29j4vpk3ixltz0zn786dh421epo4k59gp9nh9qf3mf8rt7wbq3c3xj08jp5olredyymqmc41jbhi4eh3ypofg27nysohwinmbzhdauxt8htdnuutdhtl3ag4a5sxq5y8824epszchefvq3i3zg2jxvswvp3w42rqlhrgjgalpingigu3n69j5dexbfhofel7f1h096noly9cuj2aq5xlqs5w6q8dyxj9gs67bdawxgvkryhxwi43l7zs0ulq0db2oqi0vh1nip4hjuzbdkyjuw77m0srqlf8g7hj9sfewca2znyh733fxff7ja79o20joa47dq6fehsr5xwdl0lfp9b0wxuzir6mi74uai60dl0dxi8o0f8z2g782bhnw6u2cu6k9tmknl4ah85zpakdrjjdp56l5iwoh7iov3j201xmyv8xkv2tsfuy6xgugnd25lf4z4ium45wostkl2hrpraxkh3c8f4w6brd4tcryqxw4a6fmz7s6b4wdgmqvqsvbzdb8eqa0tyaaed1s7j6rnmxh775ex7kj17cl2mb9vrjwq01tnx8f63hvdcb3m1sy6ruxyhh0chzo2b76yzt0ceeix5vfq738j1ocsxx947gh9xn1usocemtwl0laeqmuh10h9r11ih2k3vs1gpxnidcgrslphz28s7jjr70vq8qns59n9c65r2drivyrlgwnidjp32460hg0d9hcoc5ot4g8t8i8trarkxl4004xsk3afg0hvmo5b8xr8vdwrjheeur9u5186tm77mf5ioukwczopmv43er27cgwnzkb74ucdlkr1p4cbmjna6pq7b0z69qbw5uhmni4p1x93c1aw3pf6w3ibq2vpzxuodjn7b2m3dfkxfcip50zpcz494xy56gd96v460wb6ojfb8dih20w0rn1d3mvff584sce8l37yq4u5cwvlpkf8c56ue3slcqs9y7xthrtmyk7udh77dd9xpgl29pzz19x5u4jesfia59dh1t871mtv1ym66423a0638jgcyh1udolfevjgz9620okxwfnyldls8sr3us59vd0rgn2144yef7lc8by3hnxuunpvvhkxtusxb79jndniaa2b1rdxfw64pye2tpdgfgg7i88s8b4afza49tfxcvyrrsjwqs0q2chf05ucsf2h9y5i62d4gl5sqq0kwr5246vcxxv8rie02dkr6gpsi16hwlzutparroau1f5rbaac1eoqwmg5bmn0tpsvixq1x4ey35x4lo7s1qe5zbkjcz4thfi8n37rhlcv1hw00bd4igqm95viqpu2y2slcbbuigvn6tvcyke5z3o1a67f13ujr6faocpv2n95efq9htzy0jh6p8yx5gzhhwduf5vw40l6c4rz35glgxqb83owdls7eyn8l20qe7l5yrw96l987q3286dawhq7xx5hmcv5s883epedk8skm8mf3patgbuxqbe6xjdkdhidjfrdwgekqoeifbuhdug2lp7a3yfxbcii0vcll7ypkrvd3ec15x9ysstpsi9f9rm6aaiwljxq4i8uoo75vldp01v2o3v42cq3st1xyab13s0e9ho5mqc70gmhgfycug34cqun8vsnc390erz8u0adbl231xeoq8f2mrjuniqpur3jxm634hf3jf8iar5ziq2luz1w92iu4g7whyzl8t0f7i0h12xup8324ev8bhsu4ku8ldybc2goq5k2tqizgkifcz72vdni7ng94n0isrphq9bcn60rioh25l9rmatutmp51p36kvi5ipi2kyvklextk0hpub4o47xwyq0y3jap53s3opsj1vjvpyhjuuo9634lakb5r3t9orbx8zr9uopfdmuej2dqj3o6z9tum3lubsjun91n2r8xgonhsaukyinph364q7k7ae0ilcjjdwbxfpczi3jhn5onyv2lar9bckrv8698kfp7bro4psab6zng774v2dyhis562mfv3ync5ns8ckwx3080j760w2p2gaqa3hp1g9ybevzg3obxk1zb6dnt1lske5p4jhyc7xdt8vc5i2q7h34l0n6fv0d37ue1amrs9btpn308mkuz5q45g8uj5novzhoskp25iz4fe562bp31hdw8fbghavk349n3vdgr2zfdv0qlg5z96oimwcsvlpbhl0l4e2pgls78yz7mq0ndtmjxzcvcyt76c4lwafd1pqrr9tvt9rk646ab84p1yg6whwmg078bvqwnf8c7mc35e2pa3sr14wkzycy42g1agkcz83pdpixpiu9w9pqq6cgl6
mo2p70ts19zccnkwn4fgtjwowu71ro51u0po6uc54ryjd9sfwk4p5d70a6lbg5fp1hzwnng2y91uwobb5zdgpcwlv7cnen7rz9wr61oz7jkq5ou0y0omt5imvylhpi217yb50t04ufziuinjxpj3b8xgm59zhjjcee8whgdk22h2wl0o9lxi13o3o5fs2o1uzmb6h5mc341rsyyv1kr8t88g7vpuf14ii4c5cynrrj8im4gofcwb5x9vzhyv0tdex5aotc7oq3pdq6nuewgora3uj4ubhyqy4jpch04f4gc4t3eeuouj6e1tjc69r54i0fps93ah59xivvdt0hhlc8klnd56kbpsh8r05k3dheu6sqmtg9hd2yiptc59ckn26qb560rv5i01d87ufdkxf3mxvat00cplybx8ubge46slwh5yomz7wodu89saa5yntrtt29mvtij15knokminshb40cyyyia8boninro417suy74tne4krk6kgjkyyetm0v4gmlx6wixna47dy1dv8qas5wptu6odke2xpa1n5o458tg67f6uwwvtzki2f7senigy2iv0ufp1yel7w00aurchonqors3l4qjva3jpw88jw1o189ub5pqdxezogk6y12tj5x24bhk72p7kuxed1191mtt4ur83hzss8mhmiwkysj8c827w7x2dspe905yrsip8mwncot32e0wwr1jog5z1zfzmjuqok5nhb827jp580iixaixe5k82fb136g6a048xq24ye1obsv228ztd5noxbjb3bakbnlc53abk0ozvpjnaz4mkjmzavi7nmf0v6v6aieh53xvax8q2ocn0clcc3syigt8rg0dzrmfxzubp0uzx23dpgo2r0kz5x5vd4uicbc2jpvtmmmjvf8n03m2h8real827hhf19oa72pk9u5k9m6inqoxaegtqynb3esbbwldkalpvqgzh4axjtajgtyy694rc9amzxg6iib3jnlucepd3b4mkkhbuui0vqq40zbem4tzqijnz4jasayyzuhfmxdac880wfsfd47l5c6cdlpa5mtrjl2g762ntksjvyp6gruakpew5ep8481pc9rexd0mufbpt3aj3ubm7ta80kk1fzjuh9mx2j6p12nqiiqoco9v1zi04xmsdla8bty9p1ewvwxmsfzsxazsqxf7cp7gv6cxvmeh89uykevkx9595n0ry15l8f3cc3wwbgpll2tr378mrfxrcafftjl73xdyv3pd990n4r50748vqetzn1jxqdvjttww01umq751alkbkhmgyuj13jtytpvwqyt8c5uaoa6iyqdf9az6uzuhncq0dzt7bqgx7ix1pv6r5sv751goul6wtedjtkkjd80h2r4bp5z9kkp04v2pwqlmdepp6a50v1ghegf1kbhbusp08fy5h9o724kldrwhhfsf8vyw06xu1vl1y2tmcu79xly4zs7c9iwoe0c1qkqycwxqm06z5zvh696dy8u8rtmzar26fqnpmufof40a10anttpgyau346rojzjtt357chc995btj9 == \z\a\o\2\j\9\8\c\q\4\b\w\2\x\s\s\0\c\f\7\z\3\p\j\2\0\o\9\x\3\q\b\8\t\7\h\e\w\9\b\d\z\2\7\y\1\w\u\u\b\z\0\x\4\p\e\n\c\j\1\w\x\k\g\g\k\k\k\v\k\n\e\g\j\v\a\v\c\t\i\o\8\2\7\u\m\8\h\d\a\a\1\h\k\s\e\e\q\f\a\4\n\n\1\q\k\1\k\i\2\a\8\e\h\2\c\0\c\m\e\0\d\v\o\5\3\y\h\x\5\j\8\1\0\m\v\y\z\a\v\1\h\y\2\h\s\t\v\r\9\u\i\o\6\s\7\n\w\l\9\7\5\z\x\i\j\t\1\i\7\f\n\r\4\3\w\z\g\0\m\a\2\r\5\9\3\8\9\6\8\n\7\d\y\t\r\b\4\r\5\6\d\n\7\j\l\3\z\j\p\6\f\2\9\j\4\v\p\k\3\i\x\l\t\z\0\z\n\7\8\6\d\h\4\2\1\e\p\o\4\k\5\9\g\p\9\n\h\9\q\f\3\m\f\8\r\t\7\w\b\q\3\c\3\x\j\0\8\j\p\5\o\l\r\e\d\y\y\m\q\m\c\4\1\j\b\h\i\4\e\h\3\y\p\o\f\g\2\7\n\y\s\o\h\w\i\n\m\b\z\h\d\a\u\x\t\8\h\t\d\n\u\u\t\d\h\t\l\3\a\g\4\a\5\s\x\q\5\y\8\8\2\4\e\p\s\z\c\h\e\f\v\q\3\i\3\z\g\2\j\x\v\s\w\v\p\3\w\4\2\r\q\l\h\r\g\j\g\a\l\p\i\n\g\i\g\u\3\n\6\9\j\5\d\e\x\b\f\h\o\f\e\l\7\f\1\h\0\9\6\n\o\l\y\9\c\u\j\2\a\q\5\x\l\q\s\5\w\6\q\8\d\y\x\j\9\g\s\6\7\b\d\a\w\x\g\v\k\r\y\h\x\w\i\4\3\l\7\z\s\0\u\l\q\0\d\b\2\o\q\i\0\v\h\1\n\i\p\4\h\j\u\z\b\d\k\y\j\u\w\7\7\m\0\s\r\q\l\f\8\g\7\h\j\9\s\f\e\w\c\a\2\z\n\y\h\7\3\3\f\x\f\f\7\j\a\7\9\o\2\0\j\o\a\4\7\d\q\6\f\e\h\s\r\5\x\w\d\l\0\l\f\p\9\b\0\w\x\u\z\i\r\6\m\i\7\4\u\a\i\6\0\d\l\0\d\x\i\8\o\0\f\8\z\2\g\7\8\2\b\h\n\w\6\u\2\c\u\6\k\9\t\m\k\n\l\4\a\h\8\5\z\p\a\k\d\r\j\j\d\p\5\6\l\5\i\w\o\h\7\i\o\v\3\j\2\0\1\x\m\y\v\8\x\k\v\2\t\s\f\u\y\6\x\g\u\g\n\d\2\5\l\f\4\z\4\i\u\m\4\5\w\o\s\t\k\l\2\h\r\p\r\a\x\k\h\3\c\8\f\4\w\6\b\r\d\4\t\c\r\y\q\x\w\4\a\6\f\m\z\7\s\6\b\4\w\d\g\m\q\v\q\s\v\b\z\d\b\8\e\q\a\0\t\y\a\a\e\d\1\s\7\j\6\r\n\m\x\h\7\7\5\e\x\7\k\j\1\7\c\l\2\m\b\9\v\r\j\w\q\0\1\t\n\x\8\f\6\3\h\v\d\c\b\3\m\1\s\y\6\r\u\x\y\h\h\0\c\h\z\o\2\b\7\6\y\z\t\0\c\e\e\i\x\5\v\f\q\7\3\8\j\1\o\c\s\x\x\9\4\7\g\h\9\x\n\1\u\s\o\c\e\m\t\w\l\0\l\a\e\q\m\u\h\1\0\h\9\r\1\1\i\h\2\k\3\v\s\1\g\p\x\n\i\d\c\g\r\s\l\p\h\z\2\8\s\7\j\j\r\7\0\v\q\8\q\n\s\5\9\n\9\c\6\5\r\2\d\r\i\v\y\r\l\g\w\n\i\d\j\p\3\2\4\6\0\h\g\0\d\9\h\c\o\c\5\o\t\4\g\8\t\8\i\8\t\r\a\r\k\x\l\4\0\0\4\x\s\k\3\a\f\g\0\h\v\m\o\5\b\8\x\r\8\v\d\w\r\j\h\e\e\u\r\9\u\5\1\8
\6\t\m\7\7\m\f\5\i\o\u\k\w\c\z\o\p\m\v\4\3\e\r\2\7\c\g\w\n\z\k\b\7\4\u\c\d\l\k\r\1\p\4\c\b\m\j\n\a\6\p\q\7\b\0\z\6\9\q\b\w\5\u\h\m\n\i\4\p\1\x\9\3\c\1\a\w\3\p\f\6\w\3\i\b\q\2\v\p\z\x\u\o\d\j\n\7\b\2\m\3\d\f\k\x\f\c\i\p\5\0\z\p\c\z\4\9\4\x\y\5\6\g\d\9\6\v\4\6\0\w\b\6\o\j\f\b\8\d\i\h\2\0\w\0\r\n\1\d\3\m\v\f\f\5\8\4\s\c\e\8\l\3\7\y\q\4\u\5\c\w\v\l\p\k\f\8\c\5\6\u\e\3\s\l\c\q\s\9\y\7\x\t\h\r\t\m\y\k\7\u\d\h\7\7\d\d\9\x\p\g\l\2\9\p\z\z\1\9\x\5\u\4\j\e\s\f\i\a\5\9\d\h\1\t\8\7\1\m\t\v\1\y\m\6\6\4\2\3\a\0\6\3\8\j\g\c\y\h\1\u\d\o\l\f\e\v\j\g\z\9\6\2\0\o\k\x\w\f\n\y\l\d\l\s\8\s\r\3\u\s\5\9\v\d\0\r\g\n\2\1\4\4\y\e\f\7\l\c\8\b\y\3\h\n\x\u\u\n\p\v\v\h\k\x\t\u\s\x\b\7\9\j\n\d\n\i\a\a\2\b\1\r\d\x\f\w\6\4\p\y\e\2\t\p\d\g\f\g\g\7\i\8\8\s\8\b\4\a\f\z\a\4\9\t\f\x\c\v\y\r\r\s\j\w\q\s\0\q\2\c\h\f\0\5\u\c\s\f\2\h\9\y\5\i\6\2\d\4\g\l\5\s\q\q\0\k\w\r\5\2\4\6\v\c\x\x\v\8\r\i\e\0\2\d\k\r\6\g\p\s\i\1\6\h\w\l\z\u\t\p\a\r\r\o\a\u\1\f\5\r\b\a\a\c\1\e\o\q\w\m\g\5\b\m\n\0\t\p\s\v\i\x\q\1\x\4\e\y\3\5\x\4\l\o\7\s\1\q\e\5\z\b\k\j\c\z\4\t\h\f\i\8\n\3\7\r\h\l\c\v\1\h\w\0\0\b\d\4\i\g\q\m\9\5\v\i\q\p\u\2\y\2\s\l\c\b\b\u\i\g\v\n\6\t\v\c\y\k\e\5\z\3\o\1\a\6\7\f\1\3\u\j\r\6\f\a\o\c\p\v\2\n\9\5\e\f\q\9\h\t\z\y\0\j\h\6\p\8\y\x\5\g\z\h\h\w\d\u\f\5\v\w\4\0\l\6\c\4\r\z\3\5\g\l\g\x\q\b\8\3\o\w\d\l\s\7\e\y\n\8\l\2\0\q\e\7\l\5\y\r\w\9\6\l\9\8\7\q\3\2\8\6\d\a\w\h\q\7\x\x\5\h\m\c\v\5\s\8\8\3\e\p\e\d\k\8\s\k\m\8\m\f\3\p\a\t\g\b\u\x\q\b\e\6\x\j\d\k\d\h\i\d\j\f\r\d\w\g\e\k\q\o\e\i\f\b\u\h\d\u\g\2\l\p\7\a\3\y\f\x\b\c\i\i\0\v\c\l\l\7\y\p\k\r\v\d\3\e\c\1\5\x\9\y\s\s\t\p\s\i\9\f\9\r\m\6\a\a\i\w\l\j\x\q\4\i\8\u\o\o\7\5\v\l\d\p\0\1\v\2\o\3\v\4\2\c\q\3\s\t\1\x\y\a\b\1\3\s\0\e\9\h\o\5\m\q\c\7\0\g\m\h\g\f\y\c\u\g\3\4\c\q\u\n\8\v\s\n\c\3\9\0\e\r\z\8\u\0\a\d\b\l\2\3\1\x\e\o\q\8\f\2\m\r\j\u\n\i\q\p\u\r\3\j\x\m\6\3\4\h\f\3\j\f\8\i\a\r\5\z\i\q\2\l\u\z\1\w\9\2\i\u\4\g\7\w\h\y\z\l\8\t\0\f\7\i\0\h\1\2\x\u\p\8\3\2\4\e\v\8\b\h\s\u\4\k\u\8\l\d\y\b\c\2\g\o\q\5\k\2\t\q\i\z\g\k\i\f\c\z\7\2\v\d\n\i\7\n\g\9\4\n\0\i\s\r\p\h\q\9\b\c\n\6\0\r\i\o\h\2\5\l\9\r\m\a\t\u\t\m\p\5\1\p\3\6\k\v\i\5\i\p\i\2\k\y\v\k\l\e\x\t\k\0\h\p\u\b\4\o\4\7\x\w\y\q\0\y\3\j\a\p\5\3\s\3\o\p\s\j\1\v\j\v\p\y\h\j\u\u\o\9\6\3\4\l\a\k\b\5\r\3\t\9\o\r\b\x\8\z\r\9\u\o\p\f\d\m\u\e\j\2\d\q\j\3\o\6\z\9\t\u\m\3\l\u\b\s\j\u\n\9\1\n\2\r\8\x\g\o\n\h\s\a\u\k\y\i\n\p\h\3\6\4\q\7\k\7\a\e\0\i\l\c\j\j\d\w\b\x\f\p\c\z\i\3\j\h\n\5\o\n\y\v\2\l\a\r\9\b\c\k\r\v\8\6\9\8\k\f\p\7\b\r\o\4\p\s\a\b\6\z\n\g\7\7\4\v\2\d\y\h\i\s\5\6\2\m\f\v\3\y\n\c\5\n\s\8\c\k\w\x\3\0\8\0\j\7\6\0\w\2\p\2\g\a\q\a\3\h\p\1\g\9\y\b\e\v\z\g\3\o\b\x\k\1\z\b\6\d\n\t\1\l\s\k\e\5\p\4\j\h\y\c\7\x\d\t\8\v\c\5\i\2\q\7\h\3\4\l\0\n\6\f\v\0\d\3\7\u\e\1\a\m\r\s\9\b\t\p\n\3\0\8\m\k\u\z\5\q\4\5\g\8\u\j\5\n\o\v\z\h\o\s\k\p\2\5\i\z\4\f\e\5\6\2\b\p\3\1\h\d\w\8\f\b\g\h\a\v\k\3\4\9\n\3\v\d\g\r\2\z\f\d\v\0\q\l\g\5\z\9\6\o\i\m\w\c\s\v\l\p\b\h\l\0\l\4\e\2\p\g\l\s\7\8\y\z\7\m\q\0\n\d\t\m\j\x\z\c\v\c\y\t\7\6\c\4\l\w\a\f\d\1\p\q\r\r\9\t\v\t\9\r\k\6\4\6\a\b\8\4\p\1\y\g\6\w\h\w\m\g\0\7\8\b\v\q\w\n\f\8\c\7\m\c\3\5\e\2\p\a\3\s\r\1\4\w\k\z\y\c\y\4\2\g\1\a\g\k\c\z\8\3\p\d\p\i\x\p\i\u\9\w\9\p\q\q\6\c\g\l\6\m\o\2\p\7\0\t\s\1\9\z\c\c\n\k\w\n\4\f\g\t\j\w\o\w\u\7\1\r\o\5\1\u\0\p\o\6\u\c\5\4\r\y\j\d\9\s\f\w\k\4\p\5\d\7\0\a\6\l\b\g\5\f\p\1\h\z\w\n\n\g\2\y\9\1\u\w\o\b\b\5\z\d\g\p\c\w\l\v\7\c\n\e\n\7\r\z\9\w\r\6\1\o\z\7\j\k\q\5\o\u\0\y\0\o\m\t\5\i\m\v\y\l\h\p\i\2\1\7\y\b\5\0\t\0\4\u\f\z\i\u\i\n\j\x\p\j\3\b\8\x\g\m\5\9\z\h\j\j\c\e\e\8\w\h\g\d\k\2\2\h\2\w\l\0\o\9\l\x\i\1\3\o\3\o\5\f\s\2\o\1\u\z\m\b\6\h\5\m\c\3\4\1\r\s\y\y\v\1\k\r\8\t\8\8\g\7\v\p\u\f\1\4\i\i\4\c\5\c\y\n\r\r\j\8\i\m\4\g\o\f\c\w\
b\5\x\9\v\z\h\y\v\0\t\d\e\x\5\a\o\t\c\7\o\q\3\p\d\q\6\n\u\e\w\g\o\r\a\3\u\j\4\u\b\h\y\q\y\4\j\p\c\h\0\4\f\4\g\c\4\t\3\e\e\u\o\u\j\6\e\1\t\j\c\6\9\r\5\4\i\0\f\p\s\9\3\a\h\5\9\x\i\v\v\d\t\0\h\h\l\c\8\k\l\n\d\5\6\k\b\p\s\h\8\r\0\5\k\3\d\h\e\u\6\s\q\m\t\g\9\h\d\2\y\i\p\t\c\5\9\c\k\n\2\6\q\b\5\6\0\r\v\5\i\0\1\d\8\7\u\f\d\k\x\f\3\m\x\v\a\t\0\0\c\p\l\y\b\x\8\u\b\g\e\4\6\s\l\w\h\5\y\o\m\z\7\w\o\d\u\8\9\s\a\a\5\y\n\t\r\t\t\2\9\m\v\t\i\j\1\5\k\n\o\k\m\i\n\s\h\b\4\0\c\y\y\y\i\a\8\b\o\n\i\n\r\o\4\1\7\s\u\y\7\4\t\n\e\4\k\r\k\6\k\g\j\k\y\y\e\t\m\0\v\4\g\m\l\x\6\w\i\x\n\a\4\7\d\y\1\d\v\8\q\a\s\5\w\p\t\u\6\o\d\k\e\2\x\p\a\1\n\5\o\4\5\8\t\g\6\7\f\6\u\w\w\v\t\z\k\i\2\f\7\s\e\n\i\g\y\2\i\v\0\u\f\p\1\y\e\l\7\w\0\0\a\u\r\c\h\o\n\q\o\r\s\3\l\4\q\j\v\a\3\j\p\w\8\8\j\w\1\o\1\8\9\u\b\5\p\q\d\x\e\z\o\g\k\6\y\1\2\t\j\5\x\2\4\b\h\k\7\2\p\7\k\u\x\e\d\1\1\9\1\m\t\t\4\u\r\8\3\h\z\s\s\8\m\h\m\i\w\k\y\s\j\8\c\8\2\7\w\7\x\2\d\s\p\e\9\0\5\y\r\s\i\p\8\m\w\n\c\o\t\3\2\e\0\w\w\r\1\j\o\g\5\z\1\z\f\z\m\j\u\q\o\k\5\n\h\b\8\2\7\j\p\5\8\0\i\i\x\a\i\x\e\5\k\8\2\f\b\1\3\6\g\6\a\0\4\8\x\q\2\4\y\e\1\o\b\s\v\2\2\8\z\t\d\5\n\o\x\b\j\b\3\b\a\k\b\n\l\c\5\3\a\b\k\0\o\z\v\p\j\n\a\z\4\m\k\j\m\z\a\v\i\7\n\m\f\0\v\6\v\6\a\i\e\h\5\3\x\v\a\x\8\q\2\o\c\n\0\c\l\c\c\3\s\y\i\g\t\8\r\g\0\d\z\r\m\f\x\z\u\b\p\0\u\z\x\2\3\d\p\g\o\2\r\0\k\z\5\x\5\v\d\4\u\i\c\b\c\2\j\p\v\t\m\m\m\j\v\f\8\n\0\3\m\2\h\8\r\e\a\l\8\2\7\h\h\f\1\9\o\a\7\2\p\k\9\u\5\k\9\m\6\i\n\q\o\x\a\e\g\t\q\y\n\b\3\e\s\b\b\w\l\d\k\a\l\p\v\q\g\z\h\4\a\x\j\t\a\j\g\t\y\y\6\9\4\r\c\9\a\m\z\x\g\6\i\i\b\3\j\n\l\u\c\e\p\d\3\b\4\m\k\k\h\b\u\u\i\0\v\q\q\4\0\z\b\e\m\4\t\z\q\i\j\n\z\4\j\a\s\a\y\y\z\u\h\f\m\x\d\a\c\8\8\0\w\f\s\f\d\4\7\l\5\c\6\c\d\l\p\a\5\m\t\r\j\l\2\g\7\6\2\n\t\k\s\j\v\y\p\6\g\r\u\a\k\p\e\w\5\e\p\8\4\8\1\p\c\9\r\e\x\d\0\m\u\f\b\p\t\3\a\j\3\u\b\m\7\t\a\8\0\k\k\1\f\z\j\u\h\9\m\x\2\j\6\p\1\2\n\q\i\i\q\o\c\o\9\v\1\z\i\0\4\x\m\s\d\l\a\8\b\t\y\9\p\1\e\w\v\w\x\m\s\f\z\s\x\a\z\s\q\x\f\7\c\p\7\g\v\6\c\x\v\m\e\h\8\9\u\y\k\e\v\k\x\9\5\9\5\n\0\r\y\1\5\l\8\f\3\c\c\3\w\w\b\g\p\l\l\2\t\r\3\7\8\m\r\f\x\r\c\a\f\f\t\j\l\7\3\x\d\y\v\3\p\d\9\9\0\n\4\r\5\0\7\4\8\v\q\e\t\z\n\1\j\x\q\d\v\j\t\t\w\w\0\1\u\m\q\7\5\1\a\l\k\b\k\h\m\g\y\u\j\1\3\j\t\y\t\p\v\w\q\y\t\8\c\5\u\a\o\a\6\i\y\q\d\f\9\a\z\6\u\z\u\h\n\c\q\0\d\z\t\7\b\q\g\x\7\i\x\1\p\v\6\r\5\s\v\7\5\1\g\o\u\l\6\w\t\e\d\j\t\k\k\j\d\8\0\h\2\r\4\b\p\5\z\9\k\k\p\0\4\v\2\p\w\q\l\m\d\e\p\p\6\a\5\0\v\1\g\h\e\g\f\1\k\b\h\b\u\s\p\0\8\f\y\5\h\9\o\7\2\4\k\l\d\r\w\h\h\f\s\f\8\v\y\w\0\6\x\u\1\v\l\1\y\2\t\m\c\u\7\9\x\l\y\4\z\s\7\c\9\i\w\o\e\0\c\1\q\k\q\y\c\w\x\q\m\0\6\z\5\z\v\h\6\9\6\d\y\8\u\8\r\t\m\z\a\r\2\6\f\q\n\p\m\u\f\o\f\4\0\a\1\0\a\n\t\t\p\g\y\a\u\3\4\6\r\o\j\z\j\t\t\3\5\7\c\h\c\9\9\5\b\t\j\9 ]] 00:05:55.224 00:05:55.224 real 0m1.361s 00:05:55.224 user 0m0.946s 00:05:55.224 sys 0m0.606s 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:55.224 ************************************ 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
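The wall of random characters above is the tail of the dd_rw_offset check: spdk_dd pushes a generated 4 KiB payload through the target, the test reads exactly 4096 bytes back into data_check with read -rn4096, and bash compares the two strings, which is why the payload appears once verbatim and once glob-escaped in the xtrace before the timing summary. A minimal stand-alone sketch of that round-trip verification, using plain scratch files under /tmp rather than the Nvme0n1 target (the paths below are illustrative, not the test's own), could look like:

    # round-trip a 4 KiB payload through spdk_dd and verify it byte for byte
    payload=$(base64 -w0 /dev/urandom | head -c 4096)        # random printable data
    printf '%s' "$payload" > /tmp/dd.sketch.src              # illustrative scratch files
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/dd.sketch.src \
        --of=/tmp/dd.sketch.dst --bs=4096 --count=1
    read -rn4096 data_check < /tmp/dd.sketch.dst             # read back exactly 4096 bytes
    [[ "$payload" == "$data_check" ]] && echo 'read-back matches'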
00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.224 20:26:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.224 [2024-11-26 20:26:55.417320] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:55.224 [2024-11-26 20:26:55.417440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:05:55.224 { 00:05:55.224 "subsystems": [ 00:05:55.224 { 00:05:55.224 "subsystem": "bdev", 00:05:55.224 "config": [ 00:05:55.224 { 00:05:55.224 "params": { 00:05:55.224 "trtype": "pcie", 00:05:55.224 "traddr": "0000:00:10.0", 00:05:55.224 "name": "Nvme0" 00:05:55.224 }, 00:05:55.224 "method": "bdev_nvme_attach_controller" 00:05:55.224 }, 00:05:55.224 { 00:05:55.224 "method": "bdev_wait_for_examine" 00:05:55.224 } 00:05:55.224 ] 00:05:55.224 } 00:05:55.224 ] 00:05:55.224 } 00:05:55.224 [2024-11-26 20:26:55.559006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.483 [2024-11-26 20:26:55.619592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.483 [2024-11-26 20:26:55.673356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.483  [2024-11-26T20:26:56.096Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:55.741 00:05:55.741 20:26:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.741 ************************************ 00:05:55.741 END TEST spdk_dd_basic_rw 00:05:55.741 ************************************ 00:05:55.741 00:05:55.741 real 0m17.863s 00:05:55.741 user 0m12.842s 00:05:55.741 sys 0m6.761s 00:05:55.741 20:26:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.741 20:26:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.742 20:26:56 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:55.742 20:26:56 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.742 20:26:56 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.742 20:26:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:55.742 ************************************ 00:05:55.742 START TEST spdk_dd_posix 00:05:55.742 ************************************ 00:05:55.742 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:56.001 * Looking for test storage... 
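The clear_nvme step traced just above zeroes the first megabyte of the Nvme0n1 bdev: spdk_dd reads from /dev/zero with --bs=1048576 --count=1, writes to the output bdev named by --ob, and takes the bdev subsystem configuration on an anonymous descriptor via --json /dev/fd/62. A hedged stand-alone equivalent, putting the same configuration in an ordinary file instead of /dev/fd (the file path below is illustrative), would be:

    # Contents of an illustrative config file, /tmp/dd_bdev.json -- the same
    # bdev subsystem config the test passes on /dev/fd/62:
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    # Zero the first 1 MiB of the Nvme0n1 bdev using that config:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
        --count=1 --ob=Nvme0n1 --json /tmp/dd_bdev.json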
00:05:56.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.001 --rc genhtml_branch_coverage=1 00:05:56.001 --rc genhtml_function_coverage=1 00:05:56.001 --rc genhtml_legend=1 00:05:56.001 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.001 --rc genhtml_branch_coverage=1 00:05:56.001 --rc genhtml_function_coverage=1 00:05:56.001 --rc genhtml_legend=1 00:05:56.001 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.001 --rc genhtml_branch_coverage=1 00:05:56.001 --rc genhtml_function_coverage=1 00:05:56.001 --rc genhtml_legend=1 00:05:56.001 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.001 --rc genhtml_branch_coverage=1 00:05:56.001 --rc genhtml_function_coverage=1 00:05:56.001 --rc genhtml_legend=1 00:05:56.001 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:56.001 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:56.001 * First test run, liburing in use 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:56.002 ************************************ 00:05:56.002 START TEST dd_flag_append 00:05:56.002 ************************************ 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ortixzhtvq11h3vwex1y724hucvs79jq 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=j8omkyw2znd0g2b9ir1s3koiol8bicen 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ortixzhtvq11h3vwex1y724hucvs79jq 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s j8omkyw2znd0g2b9ir1s3koiol8bicen 00:05:56.002 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:56.002 [2024-11-26 20:26:56.279344] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:56.002 [2024-11-26 20:26:56.279427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:05:56.287 [2024-11-26 20:26:56.426130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.287 [2024-11-26 20:26:56.494246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.287 [2024-11-26 20:26:56.551132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.287  [2024-11-26T20:26:56.912Z] Copying: 32/32 [B] (average 31 kBps) 00:05:56.557 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ j8omkyw2znd0g2b9ir1s3koiol8bicenortixzhtvq11h3vwex1y724hucvs79jq == \j\8\o\m\k\y\w\2\z\n\d\0\g\2\b\9\i\r\1\s\3\k\o\i\o\l\8\b\i\c\e\n\o\r\t\i\x\z\h\t\v\q\1\1\h\3\v\w\e\x\1\y\7\2\4\h\u\c\v\s\7\9\j\q ]] 00:05:56.557 00:05:56.557 real 0m0.563s 00:05:56.557 user 0m0.312s 00:05:56.557 sys 0m0.281s 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:56.557 ************************************ 00:05:56.557 END TEST dd_flag_append 00:05:56.557 ************************************ 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:56.557 ************************************ 00:05:56.557 START TEST dd_flag_directory 00:05:56.557 ************************************ 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:56.557 20:26:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.557 [2024-11-26 20:26:56.897343] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:56.557 [2024-11-26 20:26:56.897445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:05:56.816 [2024-11-26 20:26:57.051938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.816 [2024-11-26 20:26:57.120824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.074 [2024-11-26 20:26:57.176698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.074 [2024-11-26 20:26:57.216880] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.074 [2024-11-26 20:26:57.216943] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.074 [2024-11-26 20:26:57.216966] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.074 [2024-11-26 20:26:57.337685] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.074 20:26:57 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:57.074 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:57.333 [2024-11-26 20:26:57.462937] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:57.333 [2024-11-26 20:26:57.463045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:05:57.333 [2024-11-26 20:26:57.616109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.333 [2024-11-26 20:26:57.685968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.591 [2024-11-26 20:26:57.743615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.591 [2024-11-26 20:26:57.787034] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.591 [2024-11-26 20:26:57.787088] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.591 [2024-11-26 20:26:57.787112] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.591 [2024-11-26 20:26:57.912760] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.850 00:05:57.850 real 0m1.140s 00:05:57.850 user 0m0.641s 00:05:57.850 sys 0m0.287s 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.850 20:26:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:57.850 ************************************ 00:05:57.850 END TEST dd_flag_directory 00:05:57.850 ************************************ 00:05:57.850 20:26:58 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:57.850 ************************************ 00:05:57.850 START TEST dd_flag_nofollow 00:05:57.850 ************************************ 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.850 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.851 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.851 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:57.851 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.851 [2024-11-26 20:26:58.088063] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:57.851 [2024-11-26 20:26:58.088170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60189 ] 00:05:58.119 [2024-11-26 20:26:58.238981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.119 [2024-11-26 20:26:58.308140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.119 [2024-11-26 20:26:58.365015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.119 [2024-11-26 20:26:58.408017] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:58.119 [2024-11-26 20:26:58.408092] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:58.119 [2024-11-26 20:26:58.408116] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.378 [2024-11-26 20:26:58.528466] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.378 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.379 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.379 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.379 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.379 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.379 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.379 20:26:58 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:58.379 20:26:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:58.379 [2024-11-26 20:26:58.648979] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:05:58.379 [2024-11-26 20:26:58.649075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60198 ] 00:05:58.638 [2024-11-26 20:26:58.795458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.638 [2024-11-26 20:26:58.855593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.638 [2024-11-26 20:26:58.910623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.638 [2024-11-26 20:26:58.953559] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:58.638 [2024-11-26 20:26:58.953636] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:58.638 [2024-11-26 20:26:58.953664] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.897 [2024-11-26 20:26:59.077446] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:58.897 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:58.897 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:58.898 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.898 [2024-11-26 20:26:59.192543] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:05:58.898 [2024-11-26 20:26:59.192634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60210 ] 00:05:59.156 [2024-11-26 20:26:59.341081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.156 [2024-11-26 20:26:59.408051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.156 [2024-11-26 20:26:59.465351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.156  [2024-11-26T20:26:59.769Z] Copying: 512/512 [B] (average 500 kBps) 00:05:59.414 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ym0js8edgab5oegzfg1017talhqif6q8z0kjcs7dk10i56g3trx9z5yu6lphmdasjvjuoj9kfp1zd1ami3yb0jpkmqyl9l0jsmj865bkrz4hjy6w1dhoxlpk8r6z5rvynfz51cv048jn3n7rihk5nj8y6i45hi86u5tgfi5uf3uyx1t9bv6atsq4uncv0w9zrqo0hzmu21pxnteua72mznweicyihsdh5bcei27q4yvkffa42p69ekiikevytuikycj0faqjih2feiw3mfo68yk9xfg2dewnl8cag20o1wmap5jqen1s0rqyuwk4mgrr8orgsi0xyazk3vxokkem9slgka37nu6r5ctcbq85e7ux3yeyddtxd1b1ea6do2bah8vr16wgskroktqcf26vsiujli0avi4phypph41vf7c0gxs26r03xzpsm68jzjyklray7l3xxc00803mn7qgipmwo2beob87yd20lrkzkerw579mqt3o4vxai2x9lny0 == \y\m\0\j\s\8\e\d\g\a\b\5\o\e\g\z\f\g\1\0\1\7\t\a\l\h\q\i\f\6\q\8\z\0\k\j\c\s\7\d\k\1\0\i\5\6\g\3\t\r\x\9\z\5\y\u\6\l\p\h\m\d\a\s\j\v\j\u\o\j\9\k\f\p\1\z\d\1\a\m\i\3\y\b\0\j\p\k\m\q\y\l\9\l\0\j\s\m\j\8\6\5\b\k\r\z\4\h\j\y\6\w\1\d\h\o\x\l\p\k\8\r\6\z\5\r\v\y\n\f\z\5\1\c\v\0\4\8\j\n\3\n\7\r\i\h\k\5\n\j\8\y\6\i\4\5\h\i\8\6\u\5\t\g\f\i\5\u\f\3\u\y\x\1\t\9\b\v\6\a\t\s\q\4\u\n\c\v\0\w\9\z\r\q\o\0\h\z\m\u\2\1\p\x\n\t\e\u\a\7\2\m\z\n\w\e\i\c\y\i\h\s\d\h\5\b\c\e\i\2\7\q\4\y\v\k\f\f\a\4\2\p\6\9\e\k\i\i\k\e\v\y\t\u\i\k\y\c\j\0\f\a\q\j\i\h\2\f\e\i\w\3\m\f\o\6\8\y\k\9\x\f\g\2\d\e\w\n\l\8\c\a\g\2\0\o\1\w\m\a\p\5\j\q\e\n\1\s\0\r\q\y\u\w\k\4\m\g\r\r\8\o\r\g\s\i\0\x\y\a\z\k\3\v\x\o\k\k\e\m\9\s\l\g\k\a\3\7\n\u\6\r\5\c\t\c\b\q\8\5\e\7\u\x\3\y\e\y\d\d\t\x\d\1\b\1\e\a\6\d\o\2\b\a\h\8\v\r\1\6\w\g\s\k\r\o\k\t\q\c\f\2\6\v\s\i\u\j\l\i\0\a\v\i\4\p\h\y\p\p\h\4\1\v\f\7\c\0\g\x\s\2\6\r\0\3\x\z\p\s\m\6\8\j\z\j\y\k\l\r\a\y\7\l\3\x\x\c\0\0\8\0\3\m\n\7\q\g\i\p\m\w\o\2\b\e\o\b\8\7\y\d\2\0\l\r\k\z\k\e\r\w\5\7\9\m\q\t\3\o\4\v\x\a\i\2\x\9\l\n\y\0 ]] 00:05:59.414 00:05:59.414 real 0m1.664s 00:05:59.414 user 0m0.922s 00:05:59.414 sys 0m0.556s 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:59.414 ************************************ 00:05:59.414 END TEST dd_flag_nofollow 00:05:59.414 ************************************ 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:59.414 ************************************ 00:05:59.414 START TEST dd_flag_noatime 00:05:59.414 ************************************ 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:59.414 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:59.415 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732652819 00:05:59.415 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.415 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732652819 00:05:59.415 20:26:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:00.852 20:27:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.852 [2024-11-26 20:27:00.818898] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:00.852 [2024-11-26 20:27:00.819004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60248 ] 00:06:00.852 [2024-11-26 20:27:00.963330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.852 [2024-11-26 20:27:01.022671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.852 [2024-11-26 20:27:01.078035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.852  [2024-11-26T20:27:01.465Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.110 00:06:01.110 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.110 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732652819 )) 00:06:01.110 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.110 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732652819 )) 00:06:01.111 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.111 [2024-11-26 20:27:01.357647] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:01.111 [2024-11-26 20:27:01.357751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60269 ] 00:06:01.369 [2024-11-26 20:27:01.505121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.369 [2024-11-26 20:27:01.567392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.369 [2024-11-26 20:27:01.621986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.369  [2024-11-26T20:27:01.982Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.627 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732652821 )) 00:06:01.627 00:06:01.627 real 0m2.110s 00:06:01.627 user 0m0.597s 00:06:01.627 sys 0m0.556s 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:01.627 ************************************ 00:06:01.627 END TEST dd_flag_noatime 00:06:01.627 ************************************ 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.627 ************************************ 00:06:01.627 START TEST dd_flags_misc 00:06:01.627 ************************************ 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.627 20:27:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:01.627 [2024-11-26 20:27:01.960300] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:01.627 [2024-11-26 20:27:01.960414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60298 ] 00:06:01.886 [2024-11-26 20:27:02.125066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.886 [2024-11-26 20:27:02.201712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.145 [2024-11-26 20:27:02.255459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.145  [2024-11-26T20:27:02.500Z] Copying: 512/512 [B] (average 500 kBps) 00:06:02.145 00:06:02.145 20:27:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnp39226zycyqav0rhmwbnn455yvggzet1z2vzq3zhsiicvz2ujlvjwhp07bwn0n15yofpvvevo1zqgi7ebl0e1dvtb7xocijbc5eja0zs3vf2bo5smuzr6k0em29hjwbnxdc8gzpsooo2vh5scnygynzbm97qycytg90d364m51xyucecsxvw0a5qkat5t4m1ulw3poyxk08chnul5y3h0sb7deh5kwj6wlv73keveod0a8nx090qpqkdxxnp3lx9tjrzfjyml6gordzdk9zumtzxjag2pbngargeb2wr3q7mpeuqqtya2qrs03ymea96om0up3k97ll5kud3qal5csj1kshhsu0rd7tndhlsmap8dp91h1a933ejz25m82lmz7zlj750z62i56svctxsqcm9c3fhefdd629hor2yidm714ek2irq93p0680gz18ctw1ftygi4akxd7h3l2kkm6jx2oydai4784fegbzuol1vck592thf93wem3n7j9 == \x\n\p\3\9\2\2\6\z\y\c\y\q\a\v\0\r\h\m\w\b\n\n\4\5\5\y\v\g\g\z\e\t\1\z\2\v\z\q\3\z\h\s\i\i\c\v\z\2\u\j\l\v\j\w\h\p\0\7\b\w\n\0\n\1\5\y\o\f\p\v\v\e\v\o\1\z\q\g\i\7\e\b\l\0\e\1\d\v\t\b\7\x\o\c\i\j\b\c\5\e\j\a\0\z\s\3\v\f\2\b\o\5\s\m\u\z\r\6\k\0\e\m\2\9\h\j\w\b\n\x\d\c\8\g\z\p\s\o\o\o\2\v\h\5\s\c\n\y\g\y\n\z\b\m\9\7\q\y\c\y\t\g\9\0\d\3\6\4\m\5\1\x\y\u\c\e\c\s\x\v\w\0\a\5\q\k\a\t\5\t\4\m\1\u\l\w\3\p\o\y\x\k\0\8\c\h\n\u\l\5\y\3\h\0\s\b\7\d\e\h\5\k\w\j\6\w\l\v\7\3\k\e\v\e\o\d\0\a\8\n\x\0\9\0\q\p\q\k\d\x\x\n\p\3\l\x\9\t\j\r\z\f\j\y\m\l\6\g\o\r\d\z\d\k\9\z\u\m\t\z\x\j\a\g\2\p\b\n\g\a\r\g\e\b\2\w\r\3\q\7\m\p\e\u\q\q\t\y\a\2\q\r\s\0\3\y\m\e\a\9\6\o\m\0\u\p\3\k\9\7\l\l\5\k\u\d\3\q\a\l\5\c\s\j\1\k\s\h\h\s\u\0\r\d\7\t\n\d\h\l\s\m\a\p\8\d\p\9\1\h\1\a\9\3\3\e\j\z\2\5\m\8\2\l\m\z\7\z\l\j\7\5\0\z\6\2\i\5\6\s\v\c\t\x\s\q\c\m\9\c\3\f\h\e\f\d\d\6\2\9\h\o\r\2\y\i\d\m\7\1\4\e\k\2\i\r\q\9\3\p\0\6\8\0\g\z\1\8\c\t\w\1\f\t\y\g\i\4\a\k\x\d\7\h\3\l\2\k\k\m\6\j\x\2\o\y\d\a\i\4\7\8\4\f\e\g\b\z\u\o\l\1\v\c\k\5\9\2\t\h\f\9\3\w\e\m\3\n\7\j\9 ]] 00:06:02.145 20:27:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.145 20:27:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:02.404 [2024-11-26 20:27:02.527863] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:02.404 [2024-11-26 20:27:02.527964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 00:06:02.404 [2024-11-26 20:27:02.673630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.404 [2024-11-26 20:27:02.733664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.663 [2024-11-26 20:27:02.786293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.663  [2024-11-26T20:27:03.018Z] Copying: 512/512 [B] (average 500 kBps) 00:06:02.663 00:06:02.663 20:27:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnp39226zycyqav0rhmwbnn455yvggzet1z2vzq3zhsiicvz2ujlvjwhp07bwn0n15yofpvvevo1zqgi7ebl0e1dvtb7xocijbc5eja0zs3vf2bo5smuzr6k0em29hjwbnxdc8gzpsooo2vh5scnygynzbm97qycytg90d364m51xyucecsxvw0a5qkat5t4m1ulw3poyxk08chnul5y3h0sb7deh5kwj6wlv73keveod0a8nx090qpqkdxxnp3lx9tjrzfjyml6gordzdk9zumtzxjag2pbngargeb2wr3q7mpeuqqtya2qrs03ymea96om0up3k97ll5kud3qal5csj1kshhsu0rd7tndhlsmap8dp91h1a933ejz25m82lmz7zlj750z62i56svctxsqcm9c3fhefdd629hor2yidm714ek2irq93p0680gz18ctw1ftygi4akxd7h3l2kkm6jx2oydai4784fegbzuol1vck592thf93wem3n7j9 == \x\n\p\3\9\2\2\6\z\y\c\y\q\a\v\0\r\h\m\w\b\n\n\4\5\5\y\v\g\g\z\e\t\1\z\2\v\z\q\3\z\h\s\i\i\c\v\z\2\u\j\l\v\j\w\h\p\0\7\b\w\n\0\n\1\5\y\o\f\p\v\v\e\v\o\1\z\q\g\i\7\e\b\l\0\e\1\d\v\t\b\7\x\o\c\i\j\b\c\5\e\j\a\0\z\s\3\v\f\2\b\o\5\s\m\u\z\r\6\k\0\e\m\2\9\h\j\w\b\n\x\d\c\8\g\z\p\s\o\o\o\2\v\h\5\s\c\n\y\g\y\n\z\b\m\9\7\q\y\c\y\t\g\9\0\d\3\6\4\m\5\1\x\y\u\c\e\c\s\x\v\w\0\a\5\q\k\a\t\5\t\4\m\1\u\l\w\3\p\o\y\x\k\0\8\c\h\n\u\l\5\y\3\h\0\s\b\7\d\e\h\5\k\w\j\6\w\l\v\7\3\k\e\v\e\o\d\0\a\8\n\x\0\9\0\q\p\q\k\d\x\x\n\p\3\l\x\9\t\j\r\z\f\j\y\m\l\6\g\o\r\d\z\d\k\9\z\u\m\t\z\x\j\a\g\2\p\b\n\g\a\r\g\e\b\2\w\r\3\q\7\m\p\e\u\q\q\t\y\a\2\q\r\s\0\3\y\m\e\a\9\6\o\m\0\u\p\3\k\9\7\l\l\5\k\u\d\3\q\a\l\5\c\s\j\1\k\s\h\h\s\u\0\r\d\7\t\n\d\h\l\s\m\a\p\8\d\p\9\1\h\1\a\9\3\3\e\j\z\2\5\m\8\2\l\m\z\7\z\l\j\7\5\0\z\6\2\i\5\6\s\v\c\t\x\s\q\c\m\9\c\3\f\h\e\f\d\d\6\2\9\h\o\r\2\y\i\d\m\7\1\4\e\k\2\i\r\q\9\3\p\0\6\8\0\g\z\1\8\c\t\w\1\f\t\y\g\i\4\a\k\x\d\7\h\3\l\2\k\k\m\6\j\x\2\o\y\d\a\i\4\7\8\4\f\e\g\b\z\u\o\l\1\v\c\k\5\9\2\t\h\f\9\3\w\e\m\3\n\7\j\9 ]] 00:06:02.663 20:27:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.663 20:27:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:02.922 [2024-11-26 20:27:03.050735] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:02.922 [2024-11-26 20:27:03.050829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60318 ] 00:06:02.922 [2024-11-26 20:27:03.191143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.922 [2024-11-26 20:27:03.250873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.180 [2024-11-26 20:27:03.304444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.180  [2024-11-26T20:27:03.535Z] Copying: 512/512 [B] (average 100 kBps) 00:06:03.180 00:06:03.439 20:27:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnp39226zycyqav0rhmwbnn455yvggzet1z2vzq3zhsiicvz2ujlvjwhp07bwn0n15yofpvvevo1zqgi7ebl0e1dvtb7xocijbc5eja0zs3vf2bo5smuzr6k0em29hjwbnxdc8gzpsooo2vh5scnygynzbm97qycytg90d364m51xyucecsxvw0a5qkat5t4m1ulw3poyxk08chnul5y3h0sb7deh5kwj6wlv73keveod0a8nx090qpqkdxxnp3lx9tjrzfjyml6gordzdk9zumtzxjag2pbngargeb2wr3q7mpeuqqtya2qrs03ymea96om0up3k97ll5kud3qal5csj1kshhsu0rd7tndhlsmap8dp91h1a933ejz25m82lmz7zlj750z62i56svctxsqcm9c3fhefdd629hor2yidm714ek2irq93p0680gz18ctw1ftygi4akxd7h3l2kkm6jx2oydai4784fegbzuol1vck592thf93wem3n7j9 == \x\n\p\3\9\2\2\6\z\y\c\y\q\a\v\0\r\h\m\w\b\n\n\4\5\5\y\v\g\g\z\e\t\1\z\2\v\z\q\3\z\h\s\i\i\c\v\z\2\u\j\l\v\j\w\h\p\0\7\b\w\n\0\n\1\5\y\o\f\p\v\v\e\v\o\1\z\q\g\i\7\e\b\l\0\e\1\d\v\t\b\7\x\o\c\i\j\b\c\5\e\j\a\0\z\s\3\v\f\2\b\o\5\s\m\u\z\r\6\k\0\e\m\2\9\h\j\w\b\n\x\d\c\8\g\z\p\s\o\o\o\2\v\h\5\s\c\n\y\g\y\n\z\b\m\9\7\q\y\c\y\t\g\9\0\d\3\6\4\m\5\1\x\y\u\c\e\c\s\x\v\w\0\a\5\q\k\a\t\5\t\4\m\1\u\l\w\3\p\o\y\x\k\0\8\c\h\n\u\l\5\y\3\h\0\s\b\7\d\e\h\5\k\w\j\6\w\l\v\7\3\k\e\v\e\o\d\0\a\8\n\x\0\9\0\q\p\q\k\d\x\x\n\p\3\l\x\9\t\j\r\z\f\j\y\m\l\6\g\o\r\d\z\d\k\9\z\u\m\t\z\x\j\a\g\2\p\b\n\g\a\r\g\e\b\2\w\r\3\q\7\m\p\e\u\q\q\t\y\a\2\q\r\s\0\3\y\m\e\a\9\6\o\m\0\u\p\3\k\9\7\l\l\5\k\u\d\3\q\a\l\5\c\s\j\1\k\s\h\h\s\u\0\r\d\7\t\n\d\h\l\s\m\a\p\8\d\p\9\1\h\1\a\9\3\3\e\j\z\2\5\m\8\2\l\m\z\7\z\l\j\7\5\0\z\6\2\i\5\6\s\v\c\t\x\s\q\c\m\9\c\3\f\h\e\f\d\d\6\2\9\h\o\r\2\y\i\d\m\7\1\4\e\k\2\i\r\q\9\3\p\0\6\8\0\g\z\1\8\c\t\w\1\f\t\y\g\i\4\a\k\x\d\7\h\3\l\2\k\k\m\6\j\x\2\o\y\d\a\i\4\7\8\4\f\e\g\b\z\u\o\l\1\v\c\k\5\9\2\t\h\f\9\3\w\e\m\3\n\7\j\9 ]] 00:06:03.439 20:27:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.439 20:27:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:03.439 [2024-11-26 20:27:03.590814] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:03.439 [2024-11-26 20:27:03.590937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:06:03.439 [2024-11-26 20:27:03.735150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.697 [2024-11-26 20:27:03.797679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.697 [2024-11-26 20:27:03.850531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.697  [2024-11-26T20:27:04.311Z] Copying: 512/512 [B] (average 250 kBps) 00:06:03.956 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnp39226zycyqav0rhmwbnn455yvggzet1z2vzq3zhsiicvz2ujlvjwhp07bwn0n15yofpvvevo1zqgi7ebl0e1dvtb7xocijbc5eja0zs3vf2bo5smuzr6k0em29hjwbnxdc8gzpsooo2vh5scnygynzbm97qycytg90d364m51xyucecsxvw0a5qkat5t4m1ulw3poyxk08chnul5y3h0sb7deh5kwj6wlv73keveod0a8nx090qpqkdxxnp3lx9tjrzfjyml6gordzdk9zumtzxjag2pbngargeb2wr3q7mpeuqqtya2qrs03ymea96om0up3k97ll5kud3qal5csj1kshhsu0rd7tndhlsmap8dp91h1a933ejz25m82lmz7zlj750z62i56svctxsqcm9c3fhefdd629hor2yidm714ek2irq93p0680gz18ctw1ftygi4akxd7h3l2kkm6jx2oydai4784fegbzuol1vck592thf93wem3n7j9 == \x\n\p\3\9\2\2\6\z\y\c\y\q\a\v\0\r\h\m\w\b\n\n\4\5\5\y\v\g\g\z\e\t\1\z\2\v\z\q\3\z\h\s\i\i\c\v\z\2\u\j\l\v\j\w\h\p\0\7\b\w\n\0\n\1\5\y\o\f\p\v\v\e\v\o\1\z\q\g\i\7\e\b\l\0\e\1\d\v\t\b\7\x\o\c\i\j\b\c\5\e\j\a\0\z\s\3\v\f\2\b\o\5\s\m\u\z\r\6\k\0\e\m\2\9\h\j\w\b\n\x\d\c\8\g\z\p\s\o\o\o\2\v\h\5\s\c\n\y\g\y\n\z\b\m\9\7\q\y\c\y\t\g\9\0\d\3\6\4\m\5\1\x\y\u\c\e\c\s\x\v\w\0\a\5\q\k\a\t\5\t\4\m\1\u\l\w\3\p\o\y\x\k\0\8\c\h\n\u\l\5\y\3\h\0\s\b\7\d\e\h\5\k\w\j\6\w\l\v\7\3\k\e\v\e\o\d\0\a\8\n\x\0\9\0\q\p\q\k\d\x\x\n\p\3\l\x\9\t\j\r\z\f\j\y\m\l\6\g\o\r\d\z\d\k\9\z\u\m\t\z\x\j\a\g\2\p\b\n\g\a\r\g\e\b\2\w\r\3\q\7\m\p\e\u\q\q\t\y\a\2\q\r\s\0\3\y\m\e\a\9\6\o\m\0\u\p\3\k\9\7\l\l\5\k\u\d\3\q\a\l\5\c\s\j\1\k\s\h\h\s\u\0\r\d\7\t\n\d\h\l\s\m\a\p\8\d\p\9\1\h\1\a\9\3\3\e\j\z\2\5\m\8\2\l\m\z\7\z\l\j\7\5\0\z\6\2\i\5\6\s\v\c\t\x\s\q\c\m\9\c\3\f\h\e\f\d\d\6\2\9\h\o\r\2\y\i\d\m\7\1\4\e\k\2\i\r\q\9\3\p\0\6\8\0\g\z\1\8\c\t\w\1\f\t\y\g\i\4\a\k\x\d\7\h\3\l\2\k\k\m\6\j\x\2\o\y\d\a\i\4\7\8\4\f\e\g\b\z\u\o\l\1\v\c\k\5\9\2\t\h\f\9\3\w\e\m\3\n\7\j\9 ]] 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.956 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:03.956 [2024-11-26 20:27:04.124872] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:03.956 [2024-11-26 20:27:04.124956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60337 ] 00:06:03.956 [2024-11-26 20:27:04.266548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.273 [2024-11-26 20:27:04.328393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.273 [2024-11-26 20:27:04.382194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.273  [2024-11-26T20:27:04.628Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.273 00:06:04.273 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6g7gv19k3g5cmdx77qlc1lh32wribo90hgeyrtfn0nx17zmdlvc1z61harpfto6m3jo31ehe2jtr4zig7yy4n2bdbq52wsnv45iyqoyhd7qbxv9rqibr8dvw2s87ewczvph6sbvvwuudlihsrkts0don1uli1oftfuqlp9nb4nyr581xqftknqign064pcswvgec45wk7jfpp3ptfsqf75u9ws694tbqrzxhuf2ck46ljmqrg49xk4p1dkerc2tndllderwle7zuhj10op47dee6xa7o11nzobwvw7saku9d35n6y61peujbss21bl63mhapkxo4v5xomaa2asidggaq1f7aeekbjr58z5un4qmz4pskcbyofjwwmj024ufj021to51d72lrcaqkppiqf08sk9498y95sod48vycze2huhuyv0dvvvzxjq7ghea6wf8c8kuqir6xbh0gwb4jcf0awekjq1ftzzybp2n1082v28ettwox5vrnofdhh69 == \l\6\g\7\g\v\1\9\k\3\g\5\c\m\d\x\7\7\q\l\c\1\l\h\3\2\w\r\i\b\o\9\0\h\g\e\y\r\t\f\n\0\n\x\1\7\z\m\d\l\v\c\1\z\6\1\h\a\r\p\f\t\o\6\m\3\j\o\3\1\e\h\e\2\j\t\r\4\z\i\g\7\y\y\4\n\2\b\d\b\q\5\2\w\s\n\v\4\5\i\y\q\o\y\h\d\7\q\b\x\v\9\r\q\i\b\r\8\d\v\w\2\s\8\7\e\w\c\z\v\p\h\6\s\b\v\v\w\u\u\d\l\i\h\s\r\k\t\s\0\d\o\n\1\u\l\i\1\o\f\t\f\u\q\l\p\9\n\b\4\n\y\r\5\8\1\x\q\f\t\k\n\q\i\g\n\0\6\4\p\c\s\w\v\g\e\c\4\5\w\k\7\j\f\p\p\3\p\t\f\s\q\f\7\5\u\9\w\s\6\9\4\t\b\q\r\z\x\h\u\f\2\c\k\4\6\l\j\m\q\r\g\4\9\x\k\4\p\1\d\k\e\r\c\2\t\n\d\l\l\d\e\r\w\l\e\7\z\u\h\j\1\0\o\p\4\7\d\e\e\6\x\a\7\o\1\1\n\z\o\b\w\v\w\7\s\a\k\u\9\d\3\5\n\6\y\6\1\p\e\u\j\b\s\s\2\1\b\l\6\3\m\h\a\p\k\x\o\4\v\5\x\o\m\a\a\2\a\s\i\d\g\g\a\q\1\f\7\a\e\e\k\b\j\r\5\8\z\5\u\n\4\q\m\z\4\p\s\k\c\b\y\o\f\j\w\w\m\j\0\2\4\u\f\j\0\2\1\t\o\5\1\d\7\2\l\r\c\a\q\k\p\p\i\q\f\0\8\s\k\9\4\9\8\y\9\5\s\o\d\4\8\v\y\c\z\e\2\h\u\h\u\y\v\0\d\v\v\v\z\x\j\q\7\g\h\e\a\6\w\f\8\c\8\k\u\q\i\r\6\x\b\h\0\g\w\b\4\j\c\f\0\a\w\e\k\j\q\1\f\t\z\z\y\b\p\2\n\1\0\8\2\v\2\8\e\t\t\w\o\x\5\v\r\n\o\f\d\h\h\6\9 ]] 00:06:04.273 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:04.273 20:27:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:04.534 [2024-11-26 20:27:04.657628] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:04.534 [2024-11-26 20:27:04.657725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:06:04.534 [2024-11-26 20:27:04.804699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.534 [2024-11-26 20:27:04.866347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.793 [2024-11-26 20:27:04.919456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.793  [2024-11-26T20:27:05.148Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.793 00:06:05.052 20:27:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6g7gv19k3g5cmdx77qlc1lh32wribo90hgeyrtfn0nx17zmdlvc1z61harpfto6m3jo31ehe2jtr4zig7yy4n2bdbq52wsnv45iyqoyhd7qbxv9rqibr8dvw2s87ewczvph6sbvvwuudlihsrkts0don1uli1oftfuqlp9nb4nyr581xqftknqign064pcswvgec45wk7jfpp3ptfsqf75u9ws694tbqrzxhuf2ck46ljmqrg49xk4p1dkerc2tndllderwle7zuhj10op47dee6xa7o11nzobwvw7saku9d35n6y61peujbss21bl63mhapkxo4v5xomaa2asidggaq1f7aeekbjr58z5un4qmz4pskcbyofjwwmj024ufj021to51d72lrcaqkppiqf08sk9498y95sod48vycze2huhuyv0dvvvzxjq7ghea6wf8c8kuqir6xbh0gwb4jcf0awekjq1ftzzybp2n1082v28ettwox5vrnofdhh69 == \l\6\g\7\g\v\1\9\k\3\g\5\c\m\d\x\7\7\q\l\c\1\l\h\3\2\w\r\i\b\o\9\0\h\g\e\y\r\t\f\n\0\n\x\1\7\z\m\d\l\v\c\1\z\6\1\h\a\r\p\f\t\o\6\m\3\j\o\3\1\e\h\e\2\j\t\r\4\z\i\g\7\y\y\4\n\2\b\d\b\q\5\2\w\s\n\v\4\5\i\y\q\o\y\h\d\7\q\b\x\v\9\r\q\i\b\r\8\d\v\w\2\s\8\7\e\w\c\z\v\p\h\6\s\b\v\v\w\u\u\d\l\i\h\s\r\k\t\s\0\d\o\n\1\u\l\i\1\o\f\t\f\u\q\l\p\9\n\b\4\n\y\r\5\8\1\x\q\f\t\k\n\q\i\g\n\0\6\4\p\c\s\w\v\g\e\c\4\5\w\k\7\j\f\p\p\3\p\t\f\s\q\f\7\5\u\9\w\s\6\9\4\t\b\q\r\z\x\h\u\f\2\c\k\4\6\l\j\m\q\r\g\4\9\x\k\4\p\1\d\k\e\r\c\2\t\n\d\l\l\d\e\r\w\l\e\7\z\u\h\j\1\0\o\p\4\7\d\e\e\6\x\a\7\o\1\1\n\z\o\b\w\v\w\7\s\a\k\u\9\d\3\5\n\6\y\6\1\p\e\u\j\b\s\s\2\1\b\l\6\3\m\h\a\p\k\x\o\4\v\5\x\o\m\a\a\2\a\s\i\d\g\g\a\q\1\f\7\a\e\e\k\b\j\r\5\8\z\5\u\n\4\q\m\z\4\p\s\k\c\b\y\o\f\j\w\w\m\j\0\2\4\u\f\j\0\2\1\t\o\5\1\d\7\2\l\r\c\a\q\k\p\p\i\q\f\0\8\s\k\9\4\9\8\y\9\5\s\o\d\4\8\v\y\c\z\e\2\h\u\h\u\y\v\0\d\v\v\v\z\x\j\q\7\g\h\e\a\6\w\f\8\c\8\k\u\q\i\r\6\x\b\h\0\g\w\b\4\j\c\f\0\a\w\e\k\j\q\1\f\t\z\z\y\b\p\2\n\1\0\8\2\v\2\8\e\t\t\w\o\x\5\v\r\n\o\f\d\h\h\6\9 ]] 00:06:05.052 20:27:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.052 20:27:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:05.052 [2024-11-26 20:27:05.205556] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:05.053 [2024-11-26 20:27:05.205683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60356 ] 00:06:05.053 [2024-11-26 20:27:05.354346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.312 [2024-11-26 20:27:05.411895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.312 [2024-11-26 20:27:05.466750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.312  [2024-11-26T20:27:05.926Z] Copying: 512/512 [B] (average 250 kBps) 00:06:05.572 00:06:05.572 20:27:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6g7gv19k3g5cmdx77qlc1lh32wribo90hgeyrtfn0nx17zmdlvc1z61harpfto6m3jo31ehe2jtr4zig7yy4n2bdbq52wsnv45iyqoyhd7qbxv9rqibr8dvw2s87ewczvph6sbvvwuudlihsrkts0don1uli1oftfuqlp9nb4nyr581xqftknqign064pcswvgec45wk7jfpp3ptfsqf75u9ws694tbqrzxhuf2ck46ljmqrg49xk4p1dkerc2tndllderwle7zuhj10op47dee6xa7o11nzobwvw7saku9d35n6y61peujbss21bl63mhapkxo4v5xomaa2asidggaq1f7aeekbjr58z5un4qmz4pskcbyofjwwmj024ufj021to51d72lrcaqkppiqf08sk9498y95sod48vycze2huhuyv0dvvvzxjq7ghea6wf8c8kuqir6xbh0gwb4jcf0awekjq1ftzzybp2n1082v28ettwox5vrnofdhh69 == \l\6\g\7\g\v\1\9\k\3\g\5\c\m\d\x\7\7\q\l\c\1\l\h\3\2\w\r\i\b\o\9\0\h\g\e\y\r\t\f\n\0\n\x\1\7\z\m\d\l\v\c\1\z\6\1\h\a\r\p\f\t\o\6\m\3\j\o\3\1\e\h\e\2\j\t\r\4\z\i\g\7\y\y\4\n\2\b\d\b\q\5\2\w\s\n\v\4\5\i\y\q\o\y\h\d\7\q\b\x\v\9\r\q\i\b\r\8\d\v\w\2\s\8\7\e\w\c\z\v\p\h\6\s\b\v\v\w\u\u\d\l\i\h\s\r\k\t\s\0\d\o\n\1\u\l\i\1\o\f\t\f\u\q\l\p\9\n\b\4\n\y\r\5\8\1\x\q\f\t\k\n\q\i\g\n\0\6\4\p\c\s\w\v\g\e\c\4\5\w\k\7\j\f\p\p\3\p\t\f\s\q\f\7\5\u\9\w\s\6\9\4\t\b\q\r\z\x\h\u\f\2\c\k\4\6\l\j\m\q\r\g\4\9\x\k\4\p\1\d\k\e\r\c\2\t\n\d\l\l\d\e\r\w\l\e\7\z\u\h\j\1\0\o\p\4\7\d\e\e\6\x\a\7\o\1\1\n\z\o\b\w\v\w\7\s\a\k\u\9\d\3\5\n\6\y\6\1\p\e\u\j\b\s\s\2\1\b\l\6\3\m\h\a\p\k\x\o\4\v\5\x\o\m\a\a\2\a\s\i\d\g\g\a\q\1\f\7\a\e\e\k\b\j\r\5\8\z\5\u\n\4\q\m\z\4\p\s\k\c\b\y\o\f\j\w\w\m\j\0\2\4\u\f\j\0\2\1\t\o\5\1\d\7\2\l\r\c\a\q\k\p\p\i\q\f\0\8\s\k\9\4\9\8\y\9\5\s\o\d\4\8\v\y\c\z\e\2\h\u\h\u\y\v\0\d\v\v\v\z\x\j\q\7\g\h\e\a\6\w\f\8\c\8\k\u\q\i\r\6\x\b\h\0\g\w\b\4\j\c\f\0\a\w\e\k\j\q\1\f\t\z\z\y\b\p\2\n\1\0\8\2\v\2\8\e\t\t\w\o\x\5\v\r\n\o\f\d\h\h\6\9 ]] 00:06:05.572 20:27:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.572 20:27:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:05.572 [2024-11-26 20:27:05.739806] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:05.572 [2024-11-26 20:27:05.739918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:06:05.572 [2024-11-26 20:27:05.892122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.830 [2024-11-26 20:27:05.967918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.830 [2024-11-26 20:27:06.023562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.830  [2024-11-26T20:27:06.444Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.089 00:06:06.089 20:27:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6g7gv19k3g5cmdx77qlc1lh32wribo90hgeyrtfn0nx17zmdlvc1z61harpfto6m3jo31ehe2jtr4zig7yy4n2bdbq52wsnv45iyqoyhd7qbxv9rqibr8dvw2s87ewczvph6sbvvwuudlihsrkts0don1uli1oftfuqlp9nb4nyr581xqftknqign064pcswvgec45wk7jfpp3ptfsqf75u9ws694tbqrzxhuf2ck46ljmqrg49xk4p1dkerc2tndllderwle7zuhj10op47dee6xa7o11nzobwvw7saku9d35n6y61peujbss21bl63mhapkxo4v5xomaa2asidggaq1f7aeekbjr58z5un4qmz4pskcbyofjwwmj024ufj021to51d72lrcaqkppiqf08sk9498y95sod48vycze2huhuyv0dvvvzxjq7ghea6wf8c8kuqir6xbh0gwb4jcf0awekjq1ftzzybp2n1082v28ettwox5vrnofdhh69 == \l\6\g\7\g\v\1\9\k\3\g\5\c\m\d\x\7\7\q\l\c\1\l\h\3\2\w\r\i\b\o\9\0\h\g\e\y\r\t\f\n\0\n\x\1\7\z\m\d\l\v\c\1\z\6\1\h\a\r\p\f\t\o\6\m\3\j\o\3\1\e\h\e\2\j\t\r\4\z\i\g\7\y\y\4\n\2\b\d\b\q\5\2\w\s\n\v\4\5\i\y\q\o\y\h\d\7\q\b\x\v\9\r\q\i\b\r\8\d\v\w\2\s\8\7\e\w\c\z\v\p\h\6\s\b\v\v\w\u\u\d\l\i\h\s\r\k\t\s\0\d\o\n\1\u\l\i\1\o\f\t\f\u\q\l\p\9\n\b\4\n\y\r\5\8\1\x\q\f\t\k\n\q\i\g\n\0\6\4\p\c\s\w\v\g\e\c\4\5\w\k\7\j\f\p\p\3\p\t\f\s\q\f\7\5\u\9\w\s\6\9\4\t\b\q\r\z\x\h\u\f\2\c\k\4\6\l\j\m\q\r\g\4\9\x\k\4\p\1\d\k\e\r\c\2\t\n\d\l\l\d\e\r\w\l\e\7\z\u\h\j\1\0\o\p\4\7\d\e\e\6\x\a\7\o\1\1\n\z\o\b\w\v\w\7\s\a\k\u\9\d\3\5\n\6\y\6\1\p\e\u\j\b\s\s\2\1\b\l\6\3\m\h\a\p\k\x\o\4\v\5\x\o\m\a\a\2\a\s\i\d\g\g\a\q\1\f\7\a\e\e\k\b\j\r\5\8\z\5\u\n\4\q\m\z\4\p\s\k\c\b\y\o\f\j\w\w\m\j\0\2\4\u\f\j\0\2\1\t\o\5\1\d\7\2\l\r\c\a\q\k\p\p\i\q\f\0\8\s\k\9\4\9\8\y\9\5\s\o\d\4\8\v\y\c\z\e\2\h\u\h\u\y\v\0\d\v\v\v\z\x\j\q\7\g\h\e\a\6\w\f\8\c\8\k\u\q\i\r\6\x\b\h\0\g\w\b\4\j\c\f\0\a\w\e\k\j\q\1\f\t\z\z\y\b\p\2\n\1\0\8\2\v\2\8\e\t\t\w\o\x\5\v\r\n\o\f\d\h\h\6\9 ]] 00:06:06.089 00:06:06.089 real 0m4.352s 00:06:06.089 user 0m2.397s 00:06:06.089 sys 0m2.137s 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.090 ************************************ 00:06:06.090 END TEST dd_flags_misc 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:06.090 ************************************ 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:06.090 * Second test run, disabling liburing, forcing AIO 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.090 ************************************ 00:06:06.090 START TEST dd_flag_append_forced_aio 00:06:06.090 ************************************ 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=66cmtkxedkdrxjft4lsb1xluzx655xy0 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=xy1dw6n70715q60s72vivpoeowvr6c09 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 66cmtkxedkdrxjft4lsb1xluzx655xy0 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s xy1dw6n70715q60s72vivpoeowvr6c09 00:06:06.090 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:06.090 [2024-11-26 20:27:06.360724] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:06.090 [2024-11-26 20:27:06.360849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60394 ] 00:06:06.349 [2024-11-26 20:27:06.512846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.349 [2024-11-26 20:27:06.576960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.349 [2024-11-26 20:27:06.632469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.349  [2024-11-26T20:27:06.964Z] Copying: 32/32 [B] (average 31 kBps) 00:06:06.609 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ xy1dw6n70715q60s72vivpoeowvr6c0966cmtkxedkdrxjft4lsb1xluzx655xy0 == \x\y\1\d\w\6\n\7\0\7\1\5\q\6\0\s\7\2\v\i\v\p\o\e\o\w\v\r\6\c\0\9\6\6\c\m\t\k\x\e\d\k\d\r\x\j\f\t\4\l\s\b\1\x\l\u\z\x\6\5\5\x\y\0 ]] 00:06:06.609 00:06:06.609 real 0m0.574s 00:06:06.609 user 0m0.315s 00:06:06.609 sys 0m0.140s 00:06:06.609 ************************************ 00:06:06.609 END TEST dd_flag_append_forced_aio 00:06:06.609 ************************************ 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:06.609 ************************************ 00:06:06.609 START TEST dd_flag_directory_forced_aio 00:06:06.609 ************************************ 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.609 20:27:06 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:06.609 20:27:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.869 [2024-11-26 20:27:06.969765] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:06.869 [2024-11-26 20:27:06.969851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60420 ] 00:06:06.869 [2024-11-26 20:27:07.110457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.869 [2024-11-26 20:27:07.184647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.128 [2024-11-26 20:27:07.237497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.128 [2024-11-26 20:27:07.276434] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.128 [2024-11-26 20:27:07.276492] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.128 [2024-11-26 20:27:07.276512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.128 [2024-11-26 20:27:07.395851] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.128 20:27:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.386 [2024-11-26 20:27:07.535843] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:07.386 [2024-11-26 20:27:07.535948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:06:07.386 [2024-11-26 20:27:07.681432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.645 [2024-11-26 20:27:07.744005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.645 [2024-11-26 20:27:07.797817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.645 [2024-11-26 20:27:07.837053] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.645 [2024-11-26 20:27:07.837108] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.645 [2024-11-26 20:27:07.837127] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.645 [2024-11-26 20:27:07.955994] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:07.903 20:27:08 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.903 00:06:07.903 real 0m1.102s 00:06:07.903 user 0m0.611s 00:06:07.903 sys 0m0.282s 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:07.903 ************************************ 00:06:07.903 END TEST dd_flag_directory_forced_aio 00:06:07.903 ************************************ 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.903 ************************************ 00:06:07.903 START TEST dd_flag_nofollow_forced_aio 00:06:07.903 ************************************ 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.903 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.903 [2024-11-26 20:27:08.127870] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:07.903 [2024-11-26 20:27:08.127959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:06:08.162 [2024-11-26 20:27:08.288791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.162 [2024-11-26 20:27:08.365121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.162 [2024-11-26 20:27:08.418692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.162 [2024-11-26 20:27:08.456515] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:08.162 [2024-11-26 20:27:08.456573] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:08.162 [2024-11-26 20:27:08.456593] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.422 [2024-11-26 20:27:08.573535] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.422 20:27:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.422 [2024-11-26 20:27:08.696980] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:08.422 [2024-11-26 20:27:08.697094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60472 ] 00:06:08.682 [2024-11-26 20:27:08.846784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.682 [2024-11-26 20:27:08.907011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.682 [2024-11-26 20:27:08.961144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.682 [2024-11-26 20:27:09.001432] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:08.682 [2024-11-26 20:27:09.001480] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:08.682 [2024-11-26 20:27:09.001501] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.940 [2024-11-26 20:27:09.124858] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.940 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.940 [2024-11-26 20:27:09.254591] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:08.940 [2024-11-26 20:27:09.254723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60481 ] 00:06:09.199 [2024-11-26 20:27:09.405935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.199 [2024-11-26 20:27:09.469243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.199 [2024-11-26 20:27:09.523084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.456  [2024-11-26T20:27:09.811Z] Copying: 512/512 [B] (average 500 kBps) 00:06:09.456 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 9esa4zmp3x7gn8dsbacijgrbzyari8n43uro0khcyiaxk54pi21q0v5gqzre5d49n1qnclyitk6udgymsmudkyokmchgmbot8jv4wlyegagzksgbuwd5sj0cn0lhgfqoj5er0bmps6iww9qt4b7n9j4gzlh6r9gf1y7b53wlh5arsrrbzi20aidpn8xn5g2bd1kfqabpiuzpzmotiu8ut0uegz3c98uh452w0tn8wkwpa3zl2bbnumvca84asg1q9li8c10pxh0fwyr27z41xh8ysi4s609miikvpqfzjv5z4eeoq8bhzn0epil8wtjzqrbpm695zaq3rw87qr8ceatbpkq4y4ekig87usd5uzzmjk5o4u70ybj8uko4jeux6civ5m0cflwaodwi7dulty7k3ozjbl6jid1a0t4be77mrmrytfv6wd4kq7gnrfbbcxe33fy5w9w6yqy97zqhjbve0nlbf02qlywvxqzq74fy78t3gnxf4so00sg3jqjk == \9\e\s\a\4\z\m\p\3\x\7\g\n\8\d\s\b\a\c\i\j\g\r\b\z\y\a\r\i\8\n\4\3\u\r\o\0\k\h\c\y\i\a\x\k\5\4\p\i\2\1\q\0\v\5\g\q\z\r\e\5\d\4\9\n\1\q\n\c\l\y\i\t\k\6\u\d\g\y\m\s\m\u\d\k\y\o\k\m\c\h\g\m\b\o\t\8\j\v\4\w\l\y\e\g\a\g\z\k\s\g\b\u\w\d\5\s\j\0\c\n\0\l\h\g\f\q\o\j\5\e\r\0\b\m\p\s\6\i\w\w\9\q\t\4\b\7\n\9\j\4\g\z\l\h\6\r\9\g\f\1\y\7\b\5\3\w\l\h\5\a\r\s\r\r\b\z\i\2\0\a\i\d\p\n\8\x\n\5\g\2\b\d\1\k\f\q\a\b\p\i\u\z\p\z\m\o\t\i\u\8\u\t\0\u\e\g\z\3\c\9\8\u\h\4\5\2\w\0\t\n\8\w\k\w\p\a\3\z\l\2\b\b\n\u\m\v\c\a\8\4\a\s\g\1\q\9\l\i\8\c\1\0\p\x\h\0\f\w\y\r\2\7\z\4\1\x\h\8\y\s\i\4\s\6\0\9\m\i\i\k\v\p\q\f\z\j\v\5\z\4\e\e\o\q\8\b\h\z\n\0\e\p\i\l\8\w\t\j\z\q\r\b\p\m\6\9\5\z\a\q\3\r\w\8\7\q\r\8\c\e\a\t\b\p\k\q\4\y\4\e\k\i\g\8\7\u\s\d\5\u\z\z\m\j\k\5\o\4\u\7\0\y\b\j\8\u\k\o\4\j\e\u\x\6\c\i\v\5\m\0\c\f\l\w\a\o\d\w\i\7\d\u\l\t\y\7\k\3\o\z\j\b\l\6\j\i\d\1\a\0\t\4\b\e\7\7\m\r\m\r\y\t\f\v\6\w\d\4\k\q\7\g\n\r\f\b\b\c\x\e\3\3\f\y\5\w\9\w\6\y\q\y\9\7\z\q\h\j\b\v\e\0\n\l\b\f\0\2\q\l\y\w\v\x\q\z\q\7\4\f\y\7\8\t\3\g\n\x\f\4\s\o\0\0\s\g\3\j\q\j\k ]] 00:06:09.457 00:06:09.457 real 0m1.710s 00:06:09.457 user 0m0.962s 00:06:09.457 sys 0m0.412s 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.457 ************************************ 00:06:09.457 END TEST dd_flag_nofollow_forced_aio 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:09.457 ************************************ 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.457 20:27:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.715 ************************************ 00:06:09.715 START TEST dd_flag_noatime_forced_aio 00:06:09.715 ************************************ 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732652829 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732652829 00:06:09.715 20:27:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:10.649 20:27:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.649 [2024-11-26 20:27:10.910510] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:10.649 [2024-11-26 20:27:10.910653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:06:10.909 [2024-11-26 20:27:11.074393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.909 [2024-11-26 20:27:11.140977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.909 [2024-11-26 20:27:11.198277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.909  [2024-11-26T20:27:11.522Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.167 00:06:11.167 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.167 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732652829 )) 00:06:11.167 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.167 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732652829 )) 00:06:11.167 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.167 [2024-11-26 20:27:11.490445] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:11.167 [2024-11-26 20:27:11.490549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60533 ] 00:06:11.426 [2024-11-26 20:27:11.632201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.426 [2024-11-26 20:27:11.693619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.426 [2024-11-26 20:27:11.747590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.685  [2024-11-26T20:27:12.040Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.685 00:06:11.685 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.685 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732652831 )) 00:06:11.685 00:06:11.685 real 0m2.173s 00:06:11.685 user 0m0.639s 00:06:11.685 sys 0m0.298s 00:06:11.685 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.685 20:27:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.685 ************************************ 00:06:11.685 END TEST dd_flag_noatime_forced_aio 00:06:11.685 ************************************ 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.685 ************************************ 00:06:11.685 START TEST dd_flags_misc_forced_aio 00:06:11.685 ************************************ 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:11.685 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.944 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.944 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:11.944 [2024-11-26 20:27:12.089991] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:11.944 [2024-11-26 20:27:12.090071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:06:11.944 [2024-11-26 20:27:12.235652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.944 [2024-11-26 20:27:12.295654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.202 [2024-11-26 20:27:12.349729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.202  [2024-11-26T20:27:12.816Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.461 00:06:12.462 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0daxi7lyj6ws2r0mr4pveiqkf2mp0cf8z1i5cr4n2f356g3qivnhivbw241f9125rsp78oqf9weiqbyrp75dzwoanjn09vv43alcff1abqgz4fhlv1vppaua1jfx8nsz5cf03dk4uzo2ohhbn65nafwcxek4nwacbps58bij0pkritu5tu53e20vgtfhhb3zzcldl9hyphk9cqr7eixlpsb3oplvmoux42vkrc9x9iqxr4vekzain3p426fv1qso72z9oyfoveokd5b7snrh7w6p02amtld262bqi77bpufte5jnhwyw0ks1vuh9ukw5reptqvl3x31epfs1ej2yrno4vk05wrrqxqrnyb4zcrd38rkvy0vsvhx39qrdkw14tdwyvglgdq2b7heig84h3a758dub1d2xzcwp6bzw8zogpas8tsyorwljjke2snj00mmr9llqvuovnvwypd83rwu4vuy7exxmegk6my7p5vy4gihwlcjhjuxyrenbwz7t == 
\0\d\a\x\i\7\l\y\j\6\w\s\2\r\0\m\r\4\p\v\e\i\q\k\f\2\m\p\0\c\f\8\z\1\i\5\c\r\4\n\2\f\3\5\6\g\3\q\i\v\n\h\i\v\b\w\2\4\1\f\9\1\2\5\r\s\p\7\8\o\q\f\9\w\e\i\q\b\y\r\p\7\5\d\z\w\o\a\n\j\n\0\9\v\v\4\3\a\l\c\f\f\1\a\b\q\g\z\4\f\h\l\v\1\v\p\p\a\u\a\1\j\f\x\8\n\s\z\5\c\f\0\3\d\k\4\u\z\o\2\o\h\h\b\n\6\5\n\a\f\w\c\x\e\k\4\n\w\a\c\b\p\s\5\8\b\i\j\0\p\k\r\i\t\u\5\t\u\5\3\e\2\0\v\g\t\f\h\h\b\3\z\z\c\l\d\l\9\h\y\p\h\k\9\c\q\r\7\e\i\x\l\p\s\b\3\o\p\l\v\m\o\u\x\4\2\v\k\r\c\9\x\9\i\q\x\r\4\v\e\k\z\a\i\n\3\p\4\2\6\f\v\1\q\s\o\7\2\z\9\o\y\f\o\v\e\o\k\d\5\b\7\s\n\r\h\7\w\6\p\0\2\a\m\t\l\d\2\6\2\b\q\i\7\7\b\p\u\f\t\e\5\j\n\h\w\y\w\0\k\s\1\v\u\h\9\u\k\w\5\r\e\p\t\q\v\l\3\x\3\1\e\p\f\s\1\e\j\2\y\r\n\o\4\v\k\0\5\w\r\r\q\x\q\r\n\y\b\4\z\c\r\d\3\8\r\k\v\y\0\v\s\v\h\x\3\9\q\r\d\k\w\1\4\t\d\w\y\v\g\l\g\d\q\2\b\7\h\e\i\g\8\4\h\3\a\7\5\8\d\u\b\1\d\2\x\z\c\w\p\6\b\z\w\8\z\o\g\p\a\s\8\t\s\y\o\r\w\l\j\j\k\e\2\s\n\j\0\0\m\m\r\9\l\l\q\v\u\o\v\n\v\w\y\p\d\8\3\r\w\u\4\v\u\y\7\e\x\x\m\e\g\k\6\m\y\7\p\5\v\y\4\g\i\h\w\l\c\j\h\j\u\x\y\r\e\n\b\w\z\7\t ]] 00:06:12.462 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.462 20:27:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:12.462 [2024-11-26 20:27:12.637537] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:12.462 [2024-11-26 20:27:12.637624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60567 ] 00:06:12.462 [2024-11-26 20:27:12.782999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.721 [2024-11-26 20:27:12.842929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.721 [2024-11-26 20:27:12.897013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.721  [2024-11-26T20:27:13.335Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.980 00:06:12.981 20:27:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0daxi7lyj6ws2r0mr4pveiqkf2mp0cf8z1i5cr4n2f356g3qivnhivbw241f9125rsp78oqf9weiqbyrp75dzwoanjn09vv43alcff1abqgz4fhlv1vppaua1jfx8nsz5cf03dk4uzo2ohhbn65nafwcxek4nwacbps58bij0pkritu5tu53e20vgtfhhb3zzcldl9hyphk9cqr7eixlpsb3oplvmoux42vkrc9x9iqxr4vekzain3p426fv1qso72z9oyfoveokd5b7snrh7w6p02amtld262bqi77bpufte5jnhwyw0ks1vuh9ukw5reptqvl3x31epfs1ej2yrno4vk05wrrqxqrnyb4zcrd38rkvy0vsvhx39qrdkw14tdwyvglgdq2b7heig84h3a758dub1d2xzcwp6bzw8zogpas8tsyorwljjke2snj00mmr9llqvuovnvwypd83rwu4vuy7exxmegk6my7p5vy4gihwlcjhjuxyrenbwz7t == 
\0\d\a\x\i\7\l\y\j\6\w\s\2\r\0\m\r\4\p\v\e\i\q\k\f\2\m\p\0\c\f\8\z\1\i\5\c\r\4\n\2\f\3\5\6\g\3\q\i\v\n\h\i\v\b\w\2\4\1\f\9\1\2\5\r\s\p\7\8\o\q\f\9\w\e\i\q\b\y\r\p\7\5\d\z\w\o\a\n\j\n\0\9\v\v\4\3\a\l\c\f\f\1\a\b\q\g\z\4\f\h\l\v\1\v\p\p\a\u\a\1\j\f\x\8\n\s\z\5\c\f\0\3\d\k\4\u\z\o\2\o\h\h\b\n\6\5\n\a\f\w\c\x\e\k\4\n\w\a\c\b\p\s\5\8\b\i\j\0\p\k\r\i\t\u\5\t\u\5\3\e\2\0\v\g\t\f\h\h\b\3\z\z\c\l\d\l\9\h\y\p\h\k\9\c\q\r\7\e\i\x\l\p\s\b\3\o\p\l\v\m\o\u\x\4\2\v\k\r\c\9\x\9\i\q\x\r\4\v\e\k\z\a\i\n\3\p\4\2\6\f\v\1\q\s\o\7\2\z\9\o\y\f\o\v\e\o\k\d\5\b\7\s\n\r\h\7\w\6\p\0\2\a\m\t\l\d\2\6\2\b\q\i\7\7\b\p\u\f\t\e\5\j\n\h\w\y\w\0\k\s\1\v\u\h\9\u\k\w\5\r\e\p\t\q\v\l\3\x\3\1\e\p\f\s\1\e\j\2\y\r\n\o\4\v\k\0\5\w\r\r\q\x\q\r\n\y\b\4\z\c\r\d\3\8\r\k\v\y\0\v\s\v\h\x\3\9\q\r\d\k\w\1\4\t\d\w\y\v\g\l\g\d\q\2\b\7\h\e\i\g\8\4\h\3\a\7\5\8\d\u\b\1\d\2\x\z\c\w\p\6\b\z\w\8\z\o\g\p\a\s\8\t\s\y\o\r\w\l\j\j\k\e\2\s\n\j\0\0\m\m\r\9\l\l\q\v\u\o\v\n\v\w\y\p\d\8\3\r\w\u\4\v\u\y\7\e\x\x\m\e\g\k\6\m\y\7\p\5\v\y\4\g\i\h\w\l\c\j\h\j\u\x\y\r\e\n\b\w\z\7\t ]] 00:06:12.981 20:27:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.981 20:27:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:12.981 [2024-11-26 20:27:13.189483] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:12.981 [2024-11-26 20:27:13.189600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60580 ] 00:06:13.240 [2024-11-26 20:27:13.339069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.240 [2024-11-26 20:27:13.398625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.240 [2024-11-26 20:27:13.452410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.240  [2024-11-26T20:27:13.854Z] Copying: 512/512 [B] (average 250 kBps) 00:06:13.499 00:06:13.499 20:27:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0daxi7lyj6ws2r0mr4pveiqkf2mp0cf8z1i5cr4n2f356g3qivnhivbw241f9125rsp78oqf9weiqbyrp75dzwoanjn09vv43alcff1abqgz4fhlv1vppaua1jfx8nsz5cf03dk4uzo2ohhbn65nafwcxek4nwacbps58bij0pkritu5tu53e20vgtfhhb3zzcldl9hyphk9cqr7eixlpsb3oplvmoux42vkrc9x9iqxr4vekzain3p426fv1qso72z9oyfoveokd5b7snrh7w6p02amtld262bqi77bpufte5jnhwyw0ks1vuh9ukw5reptqvl3x31epfs1ej2yrno4vk05wrrqxqrnyb4zcrd38rkvy0vsvhx39qrdkw14tdwyvglgdq2b7heig84h3a758dub1d2xzcwp6bzw8zogpas8tsyorwljjke2snj00mmr9llqvuovnvwypd83rwu4vuy7exxmegk6my7p5vy4gihwlcjhjuxyrenbwz7t == 
\0\d\a\x\i\7\l\y\j\6\w\s\2\r\0\m\r\4\p\v\e\i\q\k\f\2\m\p\0\c\f\8\z\1\i\5\c\r\4\n\2\f\3\5\6\g\3\q\i\v\n\h\i\v\b\w\2\4\1\f\9\1\2\5\r\s\p\7\8\o\q\f\9\w\e\i\q\b\y\r\p\7\5\d\z\w\o\a\n\j\n\0\9\v\v\4\3\a\l\c\f\f\1\a\b\q\g\z\4\f\h\l\v\1\v\p\p\a\u\a\1\j\f\x\8\n\s\z\5\c\f\0\3\d\k\4\u\z\o\2\o\h\h\b\n\6\5\n\a\f\w\c\x\e\k\4\n\w\a\c\b\p\s\5\8\b\i\j\0\p\k\r\i\t\u\5\t\u\5\3\e\2\0\v\g\t\f\h\h\b\3\z\z\c\l\d\l\9\h\y\p\h\k\9\c\q\r\7\e\i\x\l\p\s\b\3\o\p\l\v\m\o\u\x\4\2\v\k\r\c\9\x\9\i\q\x\r\4\v\e\k\z\a\i\n\3\p\4\2\6\f\v\1\q\s\o\7\2\z\9\o\y\f\o\v\e\o\k\d\5\b\7\s\n\r\h\7\w\6\p\0\2\a\m\t\l\d\2\6\2\b\q\i\7\7\b\p\u\f\t\e\5\j\n\h\w\y\w\0\k\s\1\v\u\h\9\u\k\w\5\r\e\p\t\q\v\l\3\x\3\1\e\p\f\s\1\e\j\2\y\r\n\o\4\v\k\0\5\w\r\r\q\x\q\r\n\y\b\4\z\c\r\d\3\8\r\k\v\y\0\v\s\v\h\x\3\9\q\r\d\k\w\1\4\t\d\w\y\v\g\l\g\d\q\2\b\7\h\e\i\g\8\4\h\3\a\7\5\8\d\u\b\1\d\2\x\z\c\w\p\6\b\z\w\8\z\o\g\p\a\s\8\t\s\y\o\r\w\l\j\j\k\e\2\s\n\j\0\0\m\m\r\9\l\l\q\v\u\o\v\n\v\w\y\p\d\8\3\r\w\u\4\v\u\y\7\e\x\x\m\e\g\k\6\m\y\7\p\5\v\y\4\g\i\h\w\l\c\j\h\j\u\x\y\r\e\n\b\w\z\7\t ]] 00:06:13.499 20:27:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.499 20:27:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:13.499 [2024-11-26 20:27:13.740475] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:13.499 [2024-11-26 20:27:13.740564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:06:13.758 [2024-11-26 20:27:13.881835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.758 [2024-11-26 20:27:13.942715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.758 [2024-11-26 20:27:13.996652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.758  [2024-11-26T20:27:14.372Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.017 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0daxi7lyj6ws2r0mr4pveiqkf2mp0cf8z1i5cr4n2f356g3qivnhivbw241f9125rsp78oqf9weiqbyrp75dzwoanjn09vv43alcff1abqgz4fhlv1vppaua1jfx8nsz5cf03dk4uzo2ohhbn65nafwcxek4nwacbps58bij0pkritu5tu53e20vgtfhhb3zzcldl9hyphk9cqr7eixlpsb3oplvmoux42vkrc9x9iqxr4vekzain3p426fv1qso72z9oyfoveokd5b7snrh7w6p02amtld262bqi77bpufte5jnhwyw0ks1vuh9ukw5reptqvl3x31epfs1ej2yrno4vk05wrrqxqrnyb4zcrd38rkvy0vsvhx39qrdkw14tdwyvglgdq2b7heig84h3a758dub1d2xzcwp6bzw8zogpas8tsyorwljjke2snj00mmr9llqvuovnvwypd83rwu4vuy7exxmegk6my7p5vy4gihwlcjhjuxyrenbwz7t == 
\0\d\a\x\i\7\l\y\j\6\w\s\2\r\0\m\r\4\p\v\e\i\q\k\f\2\m\p\0\c\f\8\z\1\i\5\c\r\4\n\2\f\3\5\6\g\3\q\i\v\n\h\i\v\b\w\2\4\1\f\9\1\2\5\r\s\p\7\8\o\q\f\9\w\e\i\q\b\y\r\p\7\5\d\z\w\o\a\n\j\n\0\9\v\v\4\3\a\l\c\f\f\1\a\b\q\g\z\4\f\h\l\v\1\v\p\p\a\u\a\1\j\f\x\8\n\s\z\5\c\f\0\3\d\k\4\u\z\o\2\o\h\h\b\n\6\5\n\a\f\w\c\x\e\k\4\n\w\a\c\b\p\s\5\8\b\i\j\0\p\k\r\i\t\u\5\t\u\5\3\e\2\0\v\g\t\f\h\h\b\3\z\z\c\l\d\l\9\h\y\p\h\k\9\c\q\r\7\e\i\x\l\p\s\b\3\o\p\l\v\m\o\u\x\4\2\v\k\r\c\9\x\9\i\q\x\r\4\v\e\k\z\a\i\n\3\p\4\2\6\f\v\1\q\s\o\7\2\z\9\o\y\f\o\v\e\o\k\d\5\b\7\s\n\r\h\7\w\6\p\0\2\a\m\t\l\d\2\6\2\b\q\i\7\7\b\p\u\f\t\e\5\j\n\h\w\y\w\0\k\s\1\v\u\h\9\u\k\w\5\r\e\p\t\q\v\l\3\x\3\1\e\p\f\s\1\e\j\2\y\r\n\o\4\v\k\0\5\w\r\r\q\x\q\r\n\y\b\4\z\c\r\d\3\8\r\k\v\y\0\v\s\v\h\x\3\9\q\r\d\k\w\1\4\t\d\w\y\v\g\l\g\d\q\2\b\7\h\e\i\g\8\4\h\3\a\7\5\8\d\u\b\1\d\2\x\z\c\w\p\6\b\z\w\8\z\o\g\p\a\s\8\t\s\y\o\r\w\l\j\j\k\e\2\s\n\j\0\0\m\m\r\9\l\l\q\v\u\o\v\n\v\w\y\p\d\8\3\r\w\u\4\v\u\y\7\e\x\x\m\e\g\k\6\m\y\7\p\5\v\y\4\g\i\h\w\l\c\j\h\j\u\x\y\r\e\n\b\w\z\7\t ]] 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.017 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.017 [2024-11-26 20:27:14.295985] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
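Condensed, the eight spdk_dd invocations in this test (every --oflag in direct/nonblock/sync/dsync for each of the two --iflag values) come from the loop sketched below. This is reconstructed from the posix.sh trace above, not copied from the script: gen_bytes is the repo helper that produces the 512 random bytes, and both its redirection into dd.dump0 and the plain string comparison at the end are simplifications of what the script actually does with the generated pattern.

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512 > dd.dump0    # repo helper; redirection into the dump file is assumed
    for flag_rw in "${flags_rw[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
            --if=dd.dump0 --iflag="$flag_ro" \
            --of=dd.dump1 --oflag="$flag_rw"
        # the real script matches the read-back against a [[ ... == pattern ]] check;
        # a plain content comparison is shown here for brevity
        [[ "$(< dd.dump0)" == "$(< dd.dump1)" ]]
    done
done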
00:06:14.017 [2024-11-26 20:27:14.296067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:06:14.276 [2024-11-26 20:27:14.439791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.276 [2024-11-26 20:27:14.499460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.276 [2024-11-26 20:27:14.554448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.276  [2024-11-26T20:27:14.890Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.535 00:06:14.535 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pm387cepsiftlvwj48jmz44apmp44w3u08oavt2w2dg3mxl1asudstxenhh1qzr92box7xzsswezutz1zheg2k23uzib3ypg128f54zl08m3ok84aqvbin2p1usb9u6fvcbvx4gnqqbckft35687ngdyvqcnuwo7fhh0d9zlz1n4osx8nhs9frmc2s3nlasf4avp401d6rrwjom0cjown853g0b3xnzpxvm6uc2jyejr39jhivyrewhim73jnuc0t7matbsrpqbf7rpba83md8c5fedh92jog27us43i7qyk6llq6p2hm7octirxw2bvww1y4ujcp5ytsuxd3ylr6bjp8pqcu0p4yo38c08vjq6y9j0aza8bro7sa8qqi4gbm10xelrw7spejq7k695ue1b3ua7xgyvveut922nzc7ssk9ofx8ytqwhcr1sjytw4ut81ebp4e2cgmnh7jer5fe3nsk9p36ll149ey9qzjs478aw3jeyywnm3g24meslq == \p\m\3\8\7\c\e\p\s\i\f\t\l\v\w\j\4\8\j\m\z\4\4\a\p\m\p\4\4\w\3\u\0\8\o\a\v\t\2\w\2\d\g\3\m\x\l\1\a\s\u\d\s\t\x\e\n\h\h\1\q\z\r\9\2\b\o\x\7\x\z\s\s\w\e\z\u\t\z\1\z\h\e\g\2\k\2\3\u\z\i\b\3\y\p\g\1\2\8\f\5\4\z\l\0\8\m\3\o\k\8\4\a\q\v\b\i\n\2\p\1\u\s\b\9\u\6\f\v\c\b\v\x\4\g\n\q\q\b\c\k\f\t\3\5\6\8\7\n\g\d\y\v\q\c\n\u\w\o\7\f\h\h\0\d\9\z\l\z\1\n\4\o\s\x\8\n\h\s\9\f\r\m\c\2\s\3\n\l\a\s\f\4\a\v\p\4\0\1\d\6\r\r\w\j\o\m\0\c\j\o\w\n\8\5\3\g\0\b\3\x\n\z\p\x\v\m\6\u\c\2\j\y\e\j\r\3\9\j\h\i\v\y\r\e\w\h\i\m\7\3\j\n\u\c\0\t\7\m\a\t\b\s\r\p\q\b\f\7\r\p\b\a\8\3\m\d\8\c\5\f\e\d\h\9\2\j\o\g\2\7\u\s\4\3\i\7\q\y\k\6\l\l\q\6\p\2\h\m\7\o\c\t\i\r\x\w\2\b\v\w\w\1\y\4\u\j\c\p\5\y\t\s\u\x\d\3\y\l\r\6\b\j\p\8\p\q\c\u\0\p\4\y\o\3\8\c\0\8\v\j\q\6\y\9\j\0\a\z\a\8\b\r\o\7\s\a\8\q\q\i\4\g\b\m\1\0\x\e\l\r\w\7\s\p\e\j\q\7\k\6\9\5\u\e\1\b\3\u\a\7\x\g\y\v\v\e\u\t\9\2\2\n\z\c\7\s\s\k\9\o\f\x\8\y\t\q\w\h\c\r\1\s\j\y\t\w\4\u\t\8\1\e\b\p\4\e\2\c\g\m\n\h\7\j\e\r\5\f\e\3\n\s\k\9\p\3\6\l\l\1\4\9\e\y\9\q\z\j\s\4\7\8\a\w\3\j\e\y\y\w\n\m\3\g\2\4\m\e\s\l\q ]] 00:06:14.535 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.535 20:27:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:14.535 [2024-11-26 20:27:14.847676] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:14.535 [2024-11-26 20:27:14.847789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60597 ] 00:06:14.794 [2024-11-26 20:27:14.995772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.794 [2024-11-26 20:27:15.052833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.794 [2024-11-26 20:27:15.106349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.794  [2024-11-26T20:27:15.409Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.054 00:06:15.054 20:27:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pm387cepsiftlvwj48jmz44apmp44w3u08oavt2w2dg3mxl1asudstxenhh1qzr92box7xzsswezutz1zheg2k23uzib3ypg128f54zl08m3ok84aqvbin2p1usb9u6fvcbvx4gnqqbckft35687ngdyvqcnuwo7fhh0d9zlz1n4osx8nhs9frmc2s3nlasf4avp401d6rrwjom0cjown853g0b3xnzpxvm6uc2jyejr39jhivyrewhim73jnuc0t7matbsrpqbf7rpba83md8c5fedh92jog27us43i7qyk6llq6p2hm7octirxw2bvww1y4ujcp5ytsuxd3ylr6bjp8pqcu0p4yo38c08vjq6y9j0aza8bro7sa8qqi4gbm10xelrw7spejq7k695ue1b3ua7xgyvveut922nzc7ssk9ofx8ytqwhcr1sjytw4ut81ebp4e2cgmnh7jer5fe3nsk9p36ll149ey9qzjs478aw3jeyywnm3g24meslq == \p\m\3\8\7\c\e\p\s\i\f\t\l\v\w\j\4\8\j\m\z\4\4\a\p\m\p\4\4\w\3\u\0\8\o\a\v\t\2\w\2\d\g\3\m\x\l\1\a\s\u\d\s\t\x\e\n\h\h\1\q\z\r\9\2\b\o\x\7\x\z\s\s\w\e\z\u\t\z\1\z\h\e\g\2\k\2\3\u\z\i\b\3\y\p\g\1\2\8\f\5\4\z\l\0\8\m\3\o\k\8\4\a\q\v\b\i\n\2\p\1\u\s\b\9\u\6\f\v\c\b\v\x\4\g\n\q\q\b\c\k\f\t\3\5\6\8\7\n\g\d\y\v\q\c\n\u\w\o\7\f\h\h\0\d\9\z\l\z\1\n\4\o\s\x\8\n\h\s\9\f\r\m\c\2\s\3\n\l\a\s\f\4\a\v\p\4\0\1\d\6\r\r\w\j\o\m\0\c\j\o\w\n\8\5\3\g\0\b\3\x\n\z\p\x\v\m\6\u\c\2\j\y\e\j\r\3\9\j\h\i\v\y\r\e\w\h\i\m\7\3\j\n\u\c\0\t\7\m\a\t\b\s\r\p\q\b\f\7\r\p\b\a\8\3\m\d\8\c\5\f\e\d\h\9\2\j\o\g\2\7\u\s\4\3\i\7\q\y\k\6\l\l\q\6\p\2\h\m\7\o\c\t\i\r\x\w\2\b\v\w\w\1\y\4\u\j\c\p\5\y\t\s\u\x\d\3\y\l\r\6\b\j\p\8\p\q\c\u\0\p\4\y\o\3\8\c\0\8\v\j\q\6\y\9\j\0\a\z\a\8\b\r\o\7\s\a\8\q\q\i\4\g\b\m\1\0\x\e\l\r\w\7\s\p\e\j\q\7\k\6\9\5\u\e\1\b\3\u\a\7\x\g\y\v\v\e\u\t\9\2\2\n\z\c\7\s\s\k\9\o\f\x\8\y\t\q\w\h\c\r\1\s\j\y\t\w\4\u\t\8\1\e\b\p\4\e\2\c\g\m\n\h\7\j\e\r\5\f\e\3\n\s\k\9\p\3\6\l\l\1\4\9\e\y\9\q\z\j\s\4\7\8\a\w\3\j\e\y\y\w\n\m\3\g\2\4\m\e\s\l\q ]] 00:06:15.054 20:27:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.054 20:27:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:15.054 [2024-11-26 20:27:15.405923] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:15.054 [2024-11-26 20:27:15.406536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:06:15.313 [2024-11-26 20:27:15.555857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.313 [2024-11-26 20:27:15.619265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.572 [2024-11-26 20:27:15.675186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.572  [2024-11-26T20:27:15.927Z] Copying: 512/512 [B] (average 250 kBps) 00:06:15.572 00:06:15.572 20:27:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pm387cepsiftlvwj48jmz44apmp44w3u08oavt2w2dg3mxl1asudstxenhh1qzr92box7xzsswezutz1zheg2k23uzib3ypg128f54zl08m3ok84aqvbin2p1usb9u6fvcbvx4gnqqbckft35687ngdyvqcnuwo7fhh0d9zlz1n4osx8nhs9frmc2s3nlasf4avp401d6rrwjom0cjown853g0b3xnzpxvm6uc2jyejr39jhivyrewhim73jnuc0t7matbsrpqbf7rpba83md8c5fedh92jog27us43i7qyk6llq6p2hm7octirxw2bvww1y4ujcp5ytsuxd3ylr6bjp8pqcu0p4yo38c08vjq6y9j0aza8bro7sa8qqi4gbm10xelrw7spejq7k695ue1b3ua7xgyvveut922nzc7ssk9ofx8ytqwhcr1sjytw4ut81ebp4e2cgmnh7jer5fe3nsk9p36ll149ey9qzjs478aw3jeyywnm3g24meslq == \p\m\3\8\7\c\e\p\s\i\f\t\l\v\w\j\4\8\j\m\z\4\4\a\p\m\p\4\4\w\3\u\0\8\o\a\v\t\2\w\2\d\g\3\m\x\l\1\a\s\u\d\s\t\x\e\n\h\h\1\q\z\r\9\2\b\o\x\7\x\z\s\s\w\e\z\u\t\z\1\z\h\e\g\2\k\2\3\u\z\i\b\3\y\p\g\1\2\8\f\5\4\z\l\0\8\m\3\o\k\8\4\a\q\v\b\i\n\2\p\1\u\s\b\9\u\6\f\v\c\b\v\x\4\g\n\q\q\b\c\k\f\t\3\5\6\8\7\n\g\d\y\v\q\c\n\u\w\o\7\f\h\h\0\d\9\z\l\z\1\n\4\o\s\x\8\n\h\s\9\f\r\m\c\2\s\3\n\l\a\s\f\4\a\v\p\4\0\1\d\6\r\r\w\j\o\m\0\c\j\o\w\n\8\5\3\g\0\b\3\x\n\z\p\x\v\m\6\u\c\2\j\y\e\j\r\3\9\j\h\i\v\y\r\e\w\h\i\m\7\3\j\n\u\c\0\t\7\m\a\t\b\s\r\p\q\b\f\7\r\p\b\a\8\3\m\d\8\c\5\f\e\d\h\9\2\j\o\g\2\7\u\s\4\3\i\7\q\y\k\6\l\l\q\6\p\2\h\m\7\o\c\t\i\r\x\w\2\b\v\w\w\1\y\4\u\j\c\p\5\y\t\s\u\x\d\3\y\l\r\6\b\j\p\8\p\q\c\u\0\p\4\y\o\3\8\c\0\8\v\j\q\6\y\9\j\0\a\z\a\8\b\r\o\7\s\a\8\q\q\i\4\g\b\m\1\0\x\e\l\r\w\7\s\p\e\j\q\7\k\6\9\5\u\e\1\b\3\u\a\7\x\g\y\v\v\e\u\t\9\2\2\n\z\c\7\s\s\k\9\o\f\x\8\y\t\q\w\h\c\r\1\s\j\y\t\w\4\u\t\8\1\e\b\p\4\e\2\c\g\m\n\h\7\j\e\r\5\f\e\3\n\s\k\9\p\3\6\l\l\1\4\9\e\y\9\q\z\j\s\4\7\8\a\w\3\j\e\y\y\w\n\m\3\g\2\4\m\e\s\l\q ]] 00:06:15.572 20:27:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.572 20:27:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:15.832 [2024-11-26 20:27:15.977662] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:15.832 [2024-11-26 20:27:15.977757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:06:15.832 [2024-11-26 20:27:16.124621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.832 [2024-11-26 20:27:16.185535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.091 [2024-11-26 20:27:16.240902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.091  [2024-11-26T20:27:16.705Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.350 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pm387cepsiftlvwj48jmz44apmp44w3u08oavt2w2dg3mxl1asudstxenhh1qzr92box7xzsswezutz1zheg2k23uzib3ypg128f54zl08m3ok84aqvbin2p1usb9u6fvcbvx4gnqqbckft35687ngdyvqcnuwo7fhh0d9zlz1n4osx8nhs9frmc2s3nlasf4avp401d6rrwjom0cjown853g0b3xnzpxvm6uc2jyejr39jhivyrewhim73jnuc0t7matbsrpqbf7rpba83md8c5fedh92jog27us43i7qyk6llq6p2hm7octirxw2bvww1y4ujcp5ytsuxd3ylr6bjp8pqcu0p4yo38c08vjq6y9j0aza8bro7sa8qqi4gbm10xelrw7spejq7k695ue1b3ua7xgyvveut922nzc7ssk9ofx8ytqwhcr1sjytw4ut81ebp4e2cgmnh7jer5fe3nsk9p36ll149ey9qzjs478aw3jeyywnm3g24meslq == \p\m\3\8\7\c\e\p\s\i\f\t\l\v\w\j\4\8\j\m\z\4\4\a\p\m\p\4\4\w\3\u\0\8\o\a\v\t\2\w\2\d\g\3\m\x\l\1\a\s\u\d\s\t\x\e\n\h\h\1\q\z\r\9\2\b\o\x\7\x\z\s\s\w\e\z\u\t\z\1\z\h\e\g\2\k\2\3\u\z\i\b\3\y\p\g\1\2\8\f\5\4\z\l\0\8\m\3\o\k\8\4\a\q\v\b\i\n\2\p\1\u\s\b\9\u\6\f\v\c\b\v\x\4\g\n\q\q\b\c\k\f\t\3\5\6\8\7\n\g\d\y\v\q\c\n\u\w\o\7\f\h\h\0\d\9\z\l\z\1\n\4\o\s\x\8\n\h\s\9\f\r\m\c\2\s\3\n\l\a\s\f\4\a\v\p\4\0\1\d\6\r\r\w\j\o\m\0\c\j\o\w\n\8\5\3\g\0\b\3\x\n\z\p\x\v\m\6\u\c\2\j\y\e\j\r\3\9\j\h\i\v\y\r\e\w\h\i\m\7\3\j\n\u\c\0\t\7\m\a\t\b\s\r\p\q\b\f\7\r\p\b\a\8\3\m\d\8\c\5\f\e\d\h\9\2\j\o\g\2\7\u\s\4\3\i\7\q\y\k\6\l\l\q\6\p\2\h\m\7\o\c\t\i\r\x\w\2\b\v\w\w\1\y\4\u\j\c\p\5\y\t\s\u\x\d\3\y\l\r\6\b\j\p\8\p\q\c\u\0\p\4\y\o\3\8\c\0\8\v\j\q\6\y\9\j\0\a\z\a\8\b\r\o\7\s\a\8\q\q\i\4\g\b\m\1\0\x\e\l\r\w\7\s\p\e\j\q\7\k\6\9\5\u\e\1\b\3\u\a\7\x\g\y\v\v\e\u\t\9\2\2\n\z\c\7\s\s\k\9\o\f\x\8\y\t\q\w\h\c\r\1\s\j\y\t\w\4\u\t\8\1\e\b\p\4\e\2\c\g\m\n\h\7\j\e\r\5\f\e\3\n\s\k\9\p\3\6\l\l\1\4\9\e\y\9\q\z\j\s\4\7\8\a\w\3\j\e\y\y\w\n\m\3\g\2\4\m\e\s\l\q ]] 00:06:16.350 00:06:16.350 real 0m4.458s 00:06:16.350 user 0m2.430s 00:06:16.350 sys 0m1.078s 00:06:16.350 ************************************ 00:06:16.350 END TEST dd_flags_misc_forced_aio 00:06:16.350 ************************************ 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.350 ************************************ 00:06:16.350 END TEST spdk_dd_posix 00:06:16.350 ************************************ 00:06:16.350 00:06:16.350 real 0m20.508s 00:06:16.350 user 0m10.092s 00:06:16.350 sys 0m6.398s 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.350 20:27:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.350 20:27:16 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:16.350 20:27:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.350 20:27:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.350 20:27:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:16.350 ************************************ 00:06:16.350 START TEST spdk_dd_malloc 00:06:16.350 ************************************ 00:06:16.350 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:16.350 * Looking for test storage... 00:06:16.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:16.350 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.350 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.350 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.609 --rc genhtml_branch_coverage=1 00:06:16.609 --rc genhtml_function_coverage=1 00:06:16.609 --rc genhtml_legend=1 00:06:16.609 --rc geninfo_all_blocks=1 00:06:16.609 --rc geninfo_unexecuted_blocks=1 00:06:16.609 00:06:16.609 ' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.609 --rc genhtml_branch_coverage=1 00:06:16.609 --rc genhtml_function_coverage=1 00:06:16.609 --rc genhtml_legend=1 00:06:16.609 --rc geninfo_all_blocks=1 00:06:16.609 --rc geninfo_unexecuted_blocks=1 00:06:16.609 00:06:16.609 ' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.609 --rc genhtml_branch_coverage=1 00:06:16.609 --rc genhtml_function_coverage=1 00:06:16.609 --rc genhtml_legend=1 00:06:16.609 --rc geninfo_all_blocks=1 00:06:16.609 --rc geninfo_unexecuted_blocks=1 00:06:16.609 00:06:16.609 ' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.609 --rc genhtml_branch_coverage=1 00:06:16.609 --rc genhtml_function_coverage=1 00:06:16.609 --rc genhtml_legend=1 00:06:16.609 --rc geninfo_all_blocks=1 00:06:16.609 --rc geninfo_unexecuted_blocks=1 00:06:16.609 00:06:16.609 ' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.609 20:27:16 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.609 20:27:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:16.610 ************************************ 00:06:16.610 START TEST dd_malloc_copy 00:06:16.610 ************************************ 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:16.610 20:27:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:16.610 [2024-11-26 20:27:16.846780] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:16.610 [2024-11-26 20:27:16.846924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:06:16.610 { 00:06:16.610 "subsystems": [ 00:06:16.610 { 00:06:16.610 "subsystem": "bdev", 00:06:16.610 "config": [ 00:06:16.610 { 00:06:16.610 "params": { 00:06:16.610 "block_size": 512, 00:06:16.610 "num_blocks": 1048576, 00:06:16.610 "name": "malloc0" 00:06:16.610 }, 00:06:16.610 "method": "bdev_malloc_create" 00:06:16.610 }, 00:06:16.610 { 00:06:16.610 "params": { 00:06:16.610 "block_size": 512, 00:06:16.610 "num_blocks": 1048576, 00:06:16.610 "name": "malloc1" 00:06:16.610 }, 00:06:16.610 "method": "bdev_malloc_create" 00:06:16.610 }, 00:06:16.610 { 00:06:16.610 "method": "bdev_wait_for_examine" 00:06:16.610 } 00:06:16.610 ] 00:06:16.610 } 00:06:16.610 ] 00:06:16.610 } 00:06:16.869 [2024-11-26 20:27:16.995070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.869 [2024-11-26 20:27:17.056447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.869 [2024-11-26 20:27:17.111057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.246  [2024-11-26T20:27:19.536Z] Copying: 201/512 [MB] (201 MBps) [2024-11-26T20:27:20.104Z] Copying: 401/512 [MB] (200 MBps) [2024-11-26T20:27:20.671Z] Copying: 512/512 [MB] (average 201 MBps) 00:06:20.316 00:06:20.316 20:27:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:20.316 20:27:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:20.316 20:27:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:20.316 20:27:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.316 [2024-11-26 20:27:20.622688] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
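Both copy directions in this test (malloc0 -> malloc1 above, and the malloc1 -> malloc0 pass that follows) run spdk_dd purely against RAM-backed bdevs described by the JSON handed over on fd 62; each bdev is 1048576 blocks of 512 bytes, i.e. the 512 MiB reported in the progress lines. The parameters below are taken from the trace, but the heredoc form is only an illustrative stand-in for the gen_conf plumbing the script really uses.

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<< 'JSON'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
JSON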
00:06:20.316 [2024-11-26 20:27:20.622777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:06:20.317 { 00:06:20.317 "subsystems": [ 00:06:20.317 { 00:06:20.317 "subsystem": "bdev", 00:06:20.317 "config": [ 00:06:20.317 { 00:06:20.317 "params": { 00:06:20.317 "block_size": 512, 00:06:20.317 "num_blocks": 1048576, 00:06:20.317 "name": "malloc0" 00:06:20.317 }, 00:06:20.317 "method": "bdev_malloc_create" 00:06:20.317 }, 00:06:20.317 { 00:06:20.317 "params": { 00:06:20.317 "block_size": 512, 00:06:20.317 "num_blocks": 1048576, 00:06:20.317 "name": "malloc1" 00:06:20.317 }, 00:06:20.317 "method": "bdev_malloc_create" 00:06:20.317 }, 00:06:20.317 { 00:06:20.317 "method": "bdev_wait_for_examine" 00:06:20.317 } 00:06:20.317 ] 00:06:20.317 } 00:06:20.317 ] 00:06:20.317 } 00:06:20.575 [2024-11-26 20:27:20.763693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.575 [2024-11-26 20:27:20.822042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.575 [2024-11-26 20:27:20.876538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.949  [2024-11-26T20:27:23.242Z] Copying: 196/512 [MB] (196 MBps) [2024-11-26T20:27:24.177Z] Copying: 390/512 [MB] (193 MBps) [2024-11-26T20:27:24.748Z] Copying: 512/512 [MB] (average 192 MBps) 00:06:24.393 00:06:24.393 00:06:24.393 real 0m7.700s 00:06:24.393 user 0m6.717s 00:06:24.393 sys 0m0.829s 00:06:24.393 20:27:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.393 ************************************ 00:06:24.393 20:27:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.393 END TEST dd_malloc_copy 00:06:24.393 ************************************ 00:06:24.393 00:06:24.393 real 0m7.933s 00:06:24.393 user 0m6.841s 00:06:24.393 sys 0m0.941s 00:06:24.393 20:27:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.393 20:27:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:24.393 ************************************ 00:06:24.393 END TEST spdk_dd_malloc 00:06:24.393 ************************************ 00:06:24.393 20:27:24 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:24.393 20:27:24 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:24.393 20:27:24 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.393 20:27:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:24.393 ************************************ 00:06:24.393 START TEST spdk_dd_bdev_to_bdev 00:06:24.393 ************************************ 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:24.393 * Looking for test storage... 
00:06:24.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.393 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.394 --rc genhtml_branch_coverage=1 00:06:24.394 --rc genhtml_function_coverage=1 00:06:24.394 --rc genhtml_legend=1 00:06:24.394 --rc geninfo_all_blocks=1 00:06:24.394 --rc geninfo_unexecuted_blocks=1 00:06:24.394 00:06:24.394 ' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.394 --rc genhtml_branch_coverage=1 00:06:24.394 --rc genhtml_function_coverage=1 00:06:24.394 --rc genhtml_legend=1 00:06:24.394 --rc geninfo_all_blocks=1 00:06:24.394 --rc geninfo_unexecuted_blocks=1 00:06:24.394 00:06:24.394 ' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.394 --rc genhtml_branch_coverage=1 00:06:24.394 --rc genhtml_function_coverage=1 00:06:24.394 --rc genhtml_legend=1 00:06:24.394 --rc geninfo_all_blocks=1 00:06:24.394 --rc geninfo_unexecuted_blocks=1 00:06:24.394 00:06:24.394 ' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.394 --rc genhtml_branch_coverage=1 00:06:24.394 --rc genhtml_function_coverage=1 00:06:24.394 --rc genhtml_legend=1 00:06:24.394 --rc geninfo_all_blocks=1 00:06:24.394 --rc geninfo_unexecuted_blocks=1 00:06:24.394 00:06:24.394 ' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.394 20:27:24 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.394 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:24.653 ************************************ 00:06:24.653 START TEST dd_inflate_file 00:06:24.653 ************************************ 00:06:24.653 20:27:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:24.653 [2024-11-26 20:27:24.797414] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:24.653 [2024-11-26 20:27:24.797504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60865 ] 00:06:24.653 [2024-11-26 20:27:24.937572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.653 [2024-11-26 20:27:24.995855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.911 [2024-11-26 20:27:25.049407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.911  [2024-11-26T20:27:25.524Z] Copying: 64/64 [MB] (average 1560 MBps) 00:06:25.169 00:06:25.169 00:06:25.169 real 0m0.563s 00:06:25.169 user 0m0.322s 00:06:25.169 sys 0m0.294s 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 ************************************ 00:06:25.169 END TEST dd_inflate_file 00:06:25.169 ************************************ 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 ************************************ 00:06:25.169 START TEST dd_copy_to_out_bdev 00:06:25.169 ************************************ 00:06:25.169 20:27:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:25.169 { 00:06:25.169 "subsystems": [ 00:06:25.169 { 00:06:25.169 "subsystem": "bdev", 00:06:25.169 "config": [ 00:06:25.169 { 00:06:25.169 "params": { 00:06:25.169 "trtype": "pcie", 00:06:25.169 "traddr": "0000:00:10.0", 00:06:25.169 "name": "Nvme0" 00:06:25.169 }, 00:06:25.169 "method": "bdev_nvme_attach_controller" 00:06:25.169 }, 00:06:25.169 { 00:06:25.169 "params": { 00:06:25.169 "trtype": "pcie", 00:06:25.169 "traddr": "0000:00:11.0", 00:06:25.169 "name": "Nvme1" 00:06:25.169 }, 00:06:25.169 "method": "bdev_nvme_attach_controller" 00:06:25.169 }, 00:06:25.169 { 00:06:25.169 "method": "bdev_wait_for_examine" 00:06:25.169 } 00:06:25.169 ] 00:06:25.169 } 00:06:25.169 ] 00:06:25.169 } 00:06:25.169 [2024-11-26 20:27:25.426553] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:25.169 [2024-11-26 20:27:25.426683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60901 ] 00:06:25.428 [2024-11-26 20:27:25.588501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.428 [2024-11-26 20:27:25.653472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.428 [2024-11-26 20:27:25.710813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.800  [2024-11-26T20:27:27.155Z] Copying: 63/64 [MB] (63 MBps) [2024-11-26T20:27:27.155Z] Copying: 64/64 [MB] (average 63 MBps) 00:06:26.800 00:06:26.800 00:06:26.800 real 0m1.758s 00:06:26.800 user 0m1.519s 00:06:26.800 sys 0m1.353s 00:06:26.800 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.800 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.800 ************************************ 00:06:26.800 END TEST dd_copy_to_out_bdev 00:06:26.800 ************************************ 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:27.058 ************************************ 00:06:27.058 START TEST dd_offset_magic 00:06:27.058 ************************************ 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:27.058 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:27.058 [2024-11-26 20:27:27.226090] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:27.058 [2024-11-26 20:27:27.226176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60941 ] 00:06:27.058 { 00:06:27.058 "subsystems": [ 00:06:27.058 { 00:06:27.058 "subsystem": "bdev", 00:06:27.058 "config": [ 00:06:27.058 { 00:06:27.058 "params": { 00:06:27.058 "trtype": "pcie", 00:06:27.058 "traddr": "0000:00:10.0", 00:06:27.058 "name": "Nvme0" 00:06:27.058 }, 00:06:27.058 "method": "bdev_nvme_attach_controller" 00:06:27.058 }, 00:06:27.058 { 00:06:27.058 "params": { 00:06:27.058 "trtype": "pcie", 00:06:27.058 "traddr": "0000:00:11.0", 00:06:27.058 "name": "Nvme1" 00:06:27.058 }, 00:06:27.058 "method": "bdev_nvme_attach_controller" 00:06:27.058 }, 00:06:27.058 { 00:06:27.058 "method": "bdev_wait_for_examine" 00:06:27.058 } 00:06:27.058 ] 00:06:27.058 } 00:06:27.058 ] 00:06:27.058 } 00:06:27.058 [2024-11-26 20:27:27.376109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.315 [2024-11-26 20:27:27.441937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.315 [2024-11-26 20:27:27.500460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.572  [2024-11-26T20:27:28.184Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:27.829 00:06:27.829 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:27.829 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:27.829 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:27.829 20:27:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:27.829 [2024-11-26 20:27:28.039293] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:27.829 [2024-11-26 20:27:28.039391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60961 ] 00:06:27.829 { 00:06:27.829 "subsystems": [ 00:06:27.829 { 00:06:27.829 "subsystem": "bdev", 00:06:27.829 "config": [ 00:06:27.829 { 00:06:27.829 "params": { 00:06:27.829 "trtype": "pcie", 00:06:27.829 "traddr": "0000:00:10.0", 00:06:27.829 "name": "Nvme0" 00:06:27.829 }, 00:06:27.829 "method": "bdev_nvme_attach_controller" 00:06:27.829 }, 00:06:27.829 { 00:06:27.829 "params": { 00:06:27.829 "trtype": "pcie", 00:06:27.829 "traddr": "0000:00:11.0", 00:06:27.829 "name": "Nvme1" 00:06:27.829 }, 00:06:27.829 "method": "bdev_nvme_attach_controller" 00:06:27.829 }, 00:06:27.829 { 00:06:27.829 "method": "bdev_wait_for_examine" 00:06:27.829 } 00:06:27.829 ] 00:06:27.829 } 00:06:27.829 ] 00:06:27.829 } 00:06:28.088 [2024-11-26 20:27:28.187570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.088 [2024-11-26 20:27:28.246999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.088 [2024-11-26 20:27:28.301607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.346  [2024-11-26T20:27:28.701Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:28.346 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:28.346 20:27:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:28.604 { 00:06:28.604 "subsystems": [ 00:06:28.604 { 00:06:28.604 "subsystem": "bdev", 00:06:28.604 "config": [ 00:06:28.604 { 00:06:28.604 "params": { 00:06:28.604 "trtype": "pcie", 00:06:28.604 "traddr": "0000:00:10.0", 00:06:28.604 "name": "Nvme0" 00:06:28.604 }, 00:06:28.604 "method": "bdev_nvme_attach_controller" 00:06:28.604 }, 00:06:28.604 { 00:06:28.604 "params": { 00:06:28.604 "trtype": "pcie", 00:06:28.604 "traddr": "0000:00:11.0", 00:06:28.604 "name": "Nvme1" 00:06:28.604 }, 00:06:28.604 "method": "bdev_nvme_attach_controller" 00:06:28.604 }, 00:06:28.604 { 00:06:28.604 "method": "bdev_wait_for_examine" 00:06:28.604 } 00:06:28.604 ] 00:06:28.604 } 00:06:28.604 ] 00:06:28.604 } 00:06:28.604 [2024-11-26 20:27:28.739200] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:28.604 [2024-11-26 20:27:28.739357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60983 ] 00:06:28.604 [2024-11-26 20:27:28.894517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.604 [2024-11-26 20:27:28.949522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.862 [2024-11-26 20:27:29.002908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.121  [2024-11-26T20:27:29.735Z] Copying: 65/65 [MB] (average 1101 MBps) 00:06:29.380 00:06:29.380 20:27:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:29.380 20:27:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:29.380 20:27:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:29.380 20:27:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:29.380 [2024-11-26 20:27:29.542323] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:29.380 [2024-11-26 20:27:29.542430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:06:29.380 { 00:06:29.380 "subsystems": [ 00:06:29.380 { 00:06:29.380 "subsystem": "bdev", 00:06:29.380 "config": [ 00:06:29.380 { 00:06:29.380 "params": { 00:06:29.380 "trtype": "pcie", 00:06:29.380 "traddr": "0000:00:10.0", 00:06:29.380 "name": "Nvme0" 00:06:29.380 }, 00:06:29.380 "method": "bdev_nvme_attach_controller" 00:06:29.380 }, 00:06:29.380 { 00:06:29.380 "params": { 00:06:29.380 "trtype": "pcie", 00:06:29.380 "traddr": "0000:00:11.0", 00:06:29.380 "name": "Nvme1" 00:06:29.380 }, 00:06:29.380 "method": "bdev_nvme_attach_controller" 00:06:29.380 }, 00:06:29.380 { 00:06:29.380 "method": "bdev_wait_for_examine" 00:06:29.380 } 00:06:29.380 ] 00:06:29.380 } 00:06:29.380 ] 00:06:29.380 } 00:06:29.380 [2024-11-26 20:27:29.688193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.639 [2024-11-26 20:27:29.748709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.639 [2024-11-26 20:27:29.804090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.639  [2024-11-26T20:27:30.254Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:29.899 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:29.899 00:06:29.899 real 0m2.999s 00:06:29.899 user 0m2.181s 00:06:29.899 sys 0m0.897s 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.899 ************************************ 00:06:29.899 END TEST dd_offset_magic 00:06:29.899 ************************************ 
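The banner above closes dd_offset_magic, which checks that data written through one NVMe bdev reappears at the expected byte offset of the other: with --bs=1048576, the --seek=64/--skip=64 pair both address offset 64 MiB, and the 26-byte magic string is then read back from the dump file. A hedged sketch of that final check (the magic text and dump path are copied from the trace; the test script's exact redirection may differ):
magic='This Is Our Magic, find it'
# first 26 bytes of the dump taken from Nvme1n1 at the 64 MiB offset
read -rn"${#magic}" magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $magic_check == "$magic" ]] && echo 'offset magic verified'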
00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:29.899 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.158 [2024-11-26 20:27:30.262186] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:30.158 [2024-11-26 20:27:30.262293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61029 ] 00:06:30.158 { 00:06:30.158 "subsystems": [ 00:06:30.158 { 00:06:30.158 "subsystem": "bdev", 00:06:30.158 "config": [ 00:06:30.158 { 00:06:30.158 "params": { 00:06:30.158 "trtype": "pcie", 00:06:30.158 "traddr": "0000:00:10.0", 00:06:30.158 "name": "Nvme0" 00:06:30.158 }, 00:06:30.158 "method": "bdev_nvme_attach_controller" 00:06:30.158 }, 00:06:30.158 { 00:06:30.158 "params": { 00:06:30.158 "trtype": "pcie", 00:06:30.158 "traddr": "0000:00:11.0", 00:06:30.158 "name": "Nvme1" 00:06:30.158 }, 00:06:30.158 "method": "bdev_nvme_attach_controller" 00:06:30.158 }, 00:06:30.158 { 00:06:30.158 "method": "bdev_wait_for_examine" 00:06:30.158 } 00:06:30.158 ] 00:06:30.158 } 00:06:30.158 ] 00:06:30.158 } 00:06:30.158 [2024-11-26 20:27:30.407630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.158 [2024-11-26 20:27:30.468656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.417 [2024-11-26 20:27:30.524804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.417  [2024-11-26T20:27:31.031Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:30.676 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json 
/dev/fd/62 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:30.676 20:27:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.676 [2024-11-26 20:27:30.949542] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:30.676 [2024-11-26 20:27:30.950267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61050 ] 00:06:30.676 { 00:06:30.676 "subsystems": [ 00:06:30.676 { 00:06:30.676 "subsystem": "bdev", 00:06:30.676 "config": [ 00:06:30.676 { 00:06:30.676 "params": { 00:06:30.676 "trtype": "pcie", 00:06:30.676 "traddr": "0000:00:10.0", 00:06:30.676 "name": "Nvme0" 00:06:30.676 }, 00:06:30.676 "method": "bdev_nvme_attach_controller" 00:06:30.676 }, 00:06:30.676 { 00:06:30.676 "params": { 00:06:30.676 "trtype": "pcie", 00:06:30.676 "traddr": "0000:00:11.0", 00:06:30.676 "name": "Nvme1" 00:06:30.676 }, 00:06:30.676 "method": "bdev_nvme_attach_controller" 00:06:30.676 }, 00:06:30.676 { 00:06:30.676 "method": "bdev_wait_for_examine" 00:06:30.676 } 00:06:30.676 ] 00:06:30.676 } 00:06:30.676 ] 00:06:30.676 } 00:06:30.936 [2024-11-26 20:27:31.091037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.936 [2024-11-26 20:27:31.151753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.936 [2024-11-26 20:27:31.205671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.194  [2024-11-26T20:27:31.807Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:31.452 00:06:31.452 20:27:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:31.452 ************************************ 00:06:31.452 END TEST spdk_dd_bdev_to_bdev 00:06:31.452 ************************************ 00:06:31.452 00:06:31.452 real 0m7.043s 00:06:31.452 user 0m5.169s 00:06:31.452 sys 0m3.235s 00:06:31.452 20:27:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.452 20:27:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:31.452 20:27:31 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:31.452 20:27:31 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:31.452 20:27:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.452 20:27:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.452 20:27:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:31.452 ************************************ 00:06:31.452 START TEST spdk_dd_uring 00:06:31.452 ************************************ 00:06:31.452 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:31.452 * Looking for test storage... 
00:06:31.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:31.452 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.452 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.452 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.712 --rc genhtml_branch_coverage=1 00:06:31.712 --rc genhtml_function_coverage=1 00:06:31.712 --rc genhtml_legend=1 00:06:31.712 --rc geninfo_all_blocks=1 00:06:31.712 --rc geninfo_unexecuted_blocks=1 00:06:31.712 00:06:31.712 ' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.712 --rc genhtml_branch_coverage=1 00:06:31.712 --rc genhtml_function_coverage=1 00:06:31.712 --rc genhtml_legend=1 00:06:31.712 --rc geninfo_all_blocks=1 00:06:31.712 --rc geninfo_unexecuted_blocks=1 00:06:31.712 00:06:31.712 ' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.712 --rc genhtml_branch_coverage=1 00:06:31.712 --rc genhtml_function_coverage=1 00:06:31.712 --rc genhtml_legend=1 00:06:31.712 --rc geninfo_all_blocks=1 00:06:31.712 --rc geninfo_unexecuted_blocks=1 00:06:31.712 00:06:31.712 ' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.712 --rc genhtml_branch_coverage=1 00:06:31.712 --rc genhtml_function_coverage=1 00:06:31.712 --rc genhtml_legend=1 00:06:31.712 --rc geninfo_all_blocks=1 00:06:31.712 --rc geninfo_unexecuted_blocks=1 00:06:31.712 00:06:31.712 ' 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.712 20:27:31 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:31.713 ************************************ 00:06:31.713 START TEST dd_uring_copy 00:06:31.713 ************************************ 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:31.713 
20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=2dem2hvmhcy3g6jxb2jk9yus1ddoe7grfjkc6x2kic4v5m5ym984au4zra5dnr4jzgdd9if9bjxaa4bg0r2cscupgj1n20cdvkhwo0l0j3yqbp7235coxdwxy6qh0zg0czkxcog86371weu6ygs9wz2jfiscdllx0ajt9ie20use2mdbr8qb5jltj8lcpk7u43bn6aoyat9d3n7to9u9fcj3qhu2u44fmpbb2xae2t60m9iqr05cpun5bhngoft2jg2lsy3fulfh2qlmg1hdkftd4uiewh82yr5pnlwxpgj6t4azjegncqcagwgu6x28vbk2dzh6dby34aw09l6zbrvk3q4l5xkp7ixfmmr26rcbcvrj45eiiiitdet7uyceiycnoe53ymc5sgi95i920af17zjnujbsujlaonpfqr6h7tydneu4sletqqbxzb64bzbdfohlli5i4xgf1abme9ivi5tm6abw5m5zyh09ffsj16zsl49hyvrmjy9dhd6k4cmo8bbg7elvw0xhlx5cluk6rnj0dhv7wcxu66v595zvae5dkfq7kg512wscah9kotg8xmg78hr5hcy4a08r6fnvzekewzhzp1azdhi7e2npnircep5vkqakt8neojbz1wqy71wyp1xp489zctmvrh2q4xiyiz2zn2ak34csge3u9c8t4t9k15et2xp99hjygs11vfbdgk0k6sg3795rhos4dh7w1ybp9u1khx1lbvw3tim3l7a889ihlkfd3s3sgfa8c6vkjyxhjhvi6v78i8tkiki3m3fijc6q6476j2r0lz05kmpkjv3mwh1stwqbj7fs3sgdvpnc8eztw6oj0xu1qy9c1yaogykn31lsqn85f1r24afhiz31li4aizbovkfz39cj1u4hmrel76ttxbb7snq8jyted5sg5nkt0ymhpfjkd0fldce9g1r1ujcr5mbjyuw30syjxzwdx5wbm9fta89gmhk5sep8tww622enre3yx5ue11ag74irlk6v 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
2dem2hvmhcy3g6jxb2jk9yus1ddoe7grfjkc6x2kic4v5m5ym984au4zra5dnr4jzgdd9if9bjxaa4bg0r2cscupgj1n20cdvkhwo0l0j3yqbp7235coxdwxy6qh0zg0czkxcog86371weu6ygs9wz2jfiscdllx0ajt9ie20use2mdbr8qb5jltj8lcpk7u43bn6aoyat9d3n7to9u9fcj3qhu2u44fmpbb2xae2t60m9iqr05cpun5bhngoft2jg2lsy3fulfh2qlmg1hdkftd4uiewh82yr5pnlwxpgj6t4azjegncqcagwgu6x28vbk2dzh6dby34aw09l6zbrvk3q4l5xkp7ixfmmr26rcbcvrj45eiiiitdet7uyceiycnoe53ymc5sgi95i920af17zjnujbsujlaonpfqr6h7tydneu4sletqqbxzb64bzbdfohlli5i4xgf1abme9ivi5tm6abw5m5zyh09ffsj16zsl49hyvrmjy9dhd6k4cmo8bbg7elvw0xhlx5cluk6rnj0dhv7wcxu66v595zvae5dkfq7kg512wscah9kotg8xmg78hr5hcy4a08r6fnvzekewzhzp1azdhi7e2npnircep5vkqakt8neojbz1wqy71wyp1xp489zctmvrh2q4xiyiz2zn2ak34csge3u9c8t4t9k15et2xp99hjygs11vfbdgk0k6sg3795rhos4dh7w1ybp9u1khx1lbvw3tim3l7a889ihlkfd3s3sgfa8c6vkjyxhjhvi6v78i8tkiki3m3fijc6q6476j2r0lz05kmpkjv3mwh1stwqbj7fs3sgdvpnc8eztw6oj0xu1qy9c1yaogykn31lsqn85f1r24afhiz31li4aizbovkfz39cj1u4hmrel76ttxbb7snq8jyted5sg5nkt0ymhpfjkd0fldce9g1r1ujcr5mbjyuw30syjxzwdx5wbm9fta89gmhk5sep8tww622enre3yx5ue11ag74irlk6v 00:06:31.713 20:27:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:31.713 [2024-11-26 20:27:31.939502] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:31.713 [2024-11-26 20:27:31.939627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61129 ] 00:06:31.972 [2024-11-26 20:27:32.091536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.972 [2024-11-26 20:27:32.157126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.972 [2024-11-26 20:27:32.214703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.541  [2024-11-26T20:27:33.463Z] Copying: 511/511 [MB] (average 1279 MBps) 00:06:33.108 00:06:33.108 20:27:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:33.108 20:27:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:33.108 20:27:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:33.108 20:27:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.108 [2024-11-26 20:27:33.276407] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
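The uring copy test provisions its backing store from the kernel zram driver (traced above via /sys/class/zram-control) before layering the SPDK uring bdev uring0 on top of it. A minimal sketch of that provisioning and teardown, assuming a root shell and a zram-enabled kernel; the 512M size and the id of 1 match this run, while hot_add hands back whatever id is free:
id=$(cat /sys/class/zram-control/hot_add)     # allocate a zram device; prints its id (1 here)
echo 512M > "/sys/block/zram${id}/disksize"   # size it; /dev/zram${id} is now usable
# /dev/zram${id} is the "filename" passed to bdev_uring_create (name "uring0"),
# paired with a 512-byte-block, 1048576-block malloc0 bdev for the copy.
echo 1 > "/sys/block/zram${id}/reset"               # teardown: drop the device's data
echo "${id}" > /sys/class/zram-control/hot_remove   # and return the id to the pool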
00:06:33.108 [2024-11-26 20:27:33.276502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61147 ] 00:06:33.108 { 00:06:33.108 "subsystems": [ 00:06:33.108 { 00:06:33.108 "subsystem": "bdev", 00:06:33.108 "config": [ 00:06:33.108 { 00:06:33.108 "params": { 00:06:33.108 "block_size": 512, 00:06:33.108 "num_blocks": 1048576, 00:06:33.108 "name": "malloc0" 00:06:33.108 }, 00:06:33.108 "method": "bdev_malloc_create" 00:06:33.108 }, 00:06:33.108 { 00:06:33.108 "params": { 00:06:33.108 "filename": "/dev/zram1", 00:06:33.108 "name": "uring0" 00:06:33.108 }, 00:06:33.108 "method": "bdev_uring_create" 00:06:33.108 }, 00:06:33.108 { 00:06:33.108 "method": "bdev_wait_for_examine" 00:06:33.108 } 00:06:33.108 ] 00:06:33.108 } 00:06:33.108 ] 00:06:33.108 } 00:06:33.108 [2024-11-26 20:27:33.422263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.367 [2024-11-26 20:27:33.475896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.367 [2024-11-26 20:27:33.530584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.745  [2024-11-26T20:27:36.058Z] Copying: 215/512 [MB] (215 MBps) [2024-11-26T20:27:36.343Z] Copying: 430/512 [MB] (215 MBps) [2024-11-26T20:27:36.602Z] Copying: 512/512 [MB] (average 215 MBps) 00:06:36.247 00:06:36.247 20:27:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:36.247 20:27:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:36.247 20:27:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:36.247 20:27:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.247 [2024-11-26 20:27:36.542491] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:36.247 [2024-11-26 20:27:36.542603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61191 ] 00:06:36.247 { 00:06:36.247 "subsystems": [ 00:06:36.247 { 00:06:36.247 "subsystem": "bdev", 00:06:36.247 "config": [ 00:06:36.247 { 00:06:36.247 "params": { 00:06:36.247 "block_size": 512, 00:06:36.247 "num_blocks": 1048576, 00:06:36.247 "name": "malloc0" 00:06:36.247 }, 00:06:36.247 "method": "bdev_malloc_create" 00:06:36.247 }, 00:06:36.247 { 00:06:36.247 "params": { 00:06:36.247 "filename": "/dev/zram1", 00:06:36.247 "name": "uring0" 00:06:36.247 }, 00:06:36.247 "method": "bdev_uring_create" 00:06:36.247 }, 00:06:36.247 { 00:06:36.247 "method": "bdev_wait_for_examine" 00:06:36.247 } 00:06:36.247 ] 00:06:36.247 } 00:06:36.247 ] 00:06:36.247 } 00:06:36.506 [2024-11-26 20:27:36.691182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.506 [2024-11-26 20:27:36.747975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.506 [2024-11-26 20:27:36.801763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.882  [2024-11-26T20:27:39.173Z] Copying: 188/512 [MB] (188 MBps) [2024-11-26T20:27:40.107Z] Copying: 363/512 [MB] (175 MBps) [2024-11-26T20:27:40.422Z] Copying: 512/512 [MB] (average 176 MBps) 00:06:40.067 00:06:40.067 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:40.068 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 2dem2hvmhcy3g6jxb2jk9yus1ddoe7grfjkc6x2kic4v5m5ym984au4zra5dnr4jzgdd9if9bjxaa4bg0r2cscupgj1n20cdvkhwo0l0j3yqbp7235coxdwxy6qh0zg0czkxcog86371weu6ygs9wz2jfiscdllx0ajt9ie20use2mdbr8qb5jltj8lcpk7u43bn6aoyat9d3n7to9u9fcj3qhu2u44fmpbb2xae2t60m9iqr05cpun5bhngoft2jg2lsy3fulfh2qlmg1hdkftd4uiewh82yr5pnlwxpgj6t4azjegncqcagwgu6x28vbk2dzh6dby34aw09l6zbrvk3q4l5xkp7ixfmmr26rcbcvrj45eiiiitdet7uyceiycnoe53ymc5sgi95i920af17zjnujbsujlaonpfqr6h7tydneu4sletqqbxzb64bzbdfohlli5i4xgf1abme9ivi5tm6abw5m5zyh09ffsj16zsl49hyvrmjy9dhd6k4cmo8bbg7elvw0xhlx5cluk6rnj0dhv7wcxu66v595zvae5dkfq7kg512wscah9kotg8xmg78hr5hcy4a08r6fnvzekewzhzp1azdhi7e2npnircep5vkqakt8neojbz1wqy71wyp1xp489zctmvrh2q4xiyiz2zn2ak34csge3u9c8t4t9k15et2xp99hjygs11vfbdgk0k6sg3795rhos4dh7w1ybp9u1khx1lbvw3tim3l7a889ihlkfd3s3sgfa8c6vkjyxhjhvi6v78i8tkiki3m3fijc6q6476j2r0lz05kmpkjv3mwh1stwqbj7fs3sgdvpnc8eztw6oj0xu1qy9c1yaogykn31lsqn85f1r24afhiz31li4aizbovkfz39cj1u4hmrel76ttxbb7snq8jyted5sg5nkt0ymhpfjkd0fldce9g1r1ujcr5mbjyuw30syjxzwdx5wbm9fta89gmhk5sep8tww622enre3yx5ue11ag74irlk6v == 
\2\d\e\m\2\h\v\m\h\c\y\3\g\6\j\x\b\2\j\k\9\y\u\s\1\d\d\o\e\7\g\r\f\j\k\c\6\x\2\k\i\c\4\v\5\m\5\y\m\9\8\4\a\u\4\z\r\a\5\d\n\r\4\j\z\g\d\d\9\i\f\9\b\j\x\a\a\4\b\g\0\r\2\c\s\c\u\p\g\j\1\n\2\0\c\d\v\k\h\w\o\0\l\0\j\3\y\q\b\p\7\2\3\5\c\o\x\d\w\x\y\6\q\h\0\z\g\0\c\z\k\x\c\o\g\8\6\3\7\1\w\e\u\6\y\g\s\9\w\z\2\j\f\i\s\c\d\l\l\x\0\a\j\t\9\i\e\2\0\u\s\e\2\m\d\b\r\8\q\b\5\j\l\t\j\8\l\c\p\k\7\u\4\3\b\n\6\a\o\y\a\t\9\d\3\n\7\t\o\9\u\9\f\c\j\3\q\h\u\2\u\4\4\f\m\p\b\b\2\x\a\e\2\t\6\0\m\9\i\q\r\0\5\c\p\u\n\5\b\h\n\g\o\f\t\2\j\g\2\l\s\y\3\f\u\l\f\h\2\q\l\m\g\1\h\d\k\f\t\d\4\u\i\e\w\h\8\2\y\r\5\p\n\l\w\x\p\g\j\6\t\4\a\z\j\e\g\n\c\q\c\a\g\w\g\u\6\x\2\8\v\b\k\2\d\z\h\6\d\b\y\3\4\a\w\0\9\l\6\z\b\r\v\k\3\q\4\l\5\x\k\p\7\i\x\f\m\m\r\2\6\r\c\b\c\v\r\j\4\5\e\i\i\i\i\t\d\e\t\7\u\y\c\e\i\y\c\n\o\e\5\3\y\m\c\5\s\g\i\9\5\i\9\2\0\a\f\1\7\z\j\n\u\j\b\s\u\j\l\a\o\n\p\f\q\r\6\h\7\t\y\d\n\e\u\4\s\l\e\t\q\q\b\x\z\b\6\4\b\z\b\d\f\o\h\l\l\i\5\i\4\x\g\f\1\a\b\m\e\9\i\v\i\5\t\m\6\a\b\w\5\m\5\z\y\h\0\9\f\f\s\j\1\6\z\s\l\4\9\h\y\v\r\m\j\y\9\d\h\d\6\k\4\c\m\o\8\b\b\g\7\e\l\v\w\0\x\h\l\x\5\c\l\u\k\6\r\n\j\0\d\h\v\7\w\c\x\u\6\6\v\5\9\5\z\v\a\e\5\d\k\f\q\7\k\g\5\1\2\w\s\c\a\h\9\k\o\t\g\8\x\m\g\7\8\h\r\5\h\c\y\4\a\0\8\r\6\f\n\v\z\e\k\e\w\z\h\z\p\1\a\z\d\h\i\7\e\2\n\p\n\i\r\c\e\p\5\v\k\q\a\k\t\8\n\e\o\j\b\z\1\w\q\y\7\1\w\y\p\1\x\p\4\8\9\z\c\t\m\v\r\h\2\q\4\x\i\y\i\z\2\z\n\2\a\k\3\4\c\s\g\e\3\u\9\c\8\t\4\t\9\k\1\5\e\t\2\x\p\9\9\h\j\y\g\s\1\1\v\f\b\d\g\k\0\k\6\s\g\3\7\9\5\r\h\o\s\4\d\h\7\w\1\y\b\p\9\u\1\k\h\x\1\l\b\v\w\3\t\i\m\3\l\7\a\8\8\9\i\h\l\k\f\d\3\s\3\s\g\f\a\8\c\6\v\k\j\y\x\h\j\h\v\i\6\v\7\8\i\8\t\k\i\k\i\3\m\3\f\i\j\c\6\q\6\4\7\6\j\2\r\0\l\z\0\5\k\m\p\k\j\v\3\m\w\h\1\s\t\w\q\b\j\7\f\s\3\s\g\d\v\p\n\c\8\e\z\t\w\6\o\j\0\x\u\1\q\y\9\c\1\y\a\o\g\y\k\n\3\1\l\s\q\n\8\5\f\1\r\2\4\a\f\h\i\z\3\1\l\i\4\a\i\z\b\o\v\k\f\z\3\9\c\j\1\u\4\h\m\r\e\l\7\6\t\t\x\b\b\7\s\n\q\8\j\y\t\e\d\5\s\g\5\n\k\t\0\y\m\h\p\f\j\k\d\0\f\l\d\c\e\9\g\1\r\1\u\j\c\r\5\m\b\j\y\u\w\3\0\s\y\j\x\z\w\d\x\5\w\b\m\9\f\t\a\8\9\g\m\h\k\5\s\e\p\8\t\w\w\6\2\2\e\n\r\e\3\y\x\5\u\e\1\1\a\g\7\4\i\r\l\k\6\v ]] 00:06:40.068 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:40.068 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 2dem2hvmhcy3g6jxb2jk9yus1ddoe7grfjkc6x2kic4v5m5ym984au4zra5dnr4jzgdd9if9bjxaa4bg0r2cscupgj1n20cdvkhwo0l0j3yqbp7235coxdwxy6qh0zg0czkxcog86371weu6ygs9wz2jfiscdllx0ajt9ie20use2mdbr8qb5jltj8lcpk7u43bn6aoyat9d3n7to9u9fcj3qhu2u44fmpbb2xae2t60m9iqr05cpun5bhngoft2jg2lsy3fulfh2qlmg1hdkftd4uiewh82yr5pnlwxpgj6t4azjegncqcagwgu6x28vbk2dzh6dby34aw09l6zbrvk3q4l5xkp7ixfmmr26rcbcvrj45eiiiitdet7uyceiycnoe53ymc5sgi95i920af17zjnujbsujlaonpfqr6h7tydneu4sletqqbxzb64bzbdfohlli5i4xgf1abme9ivi5tm6abw5m5zyh09ffsj16zsl49hyvrmjy9dhd6k4cmo8bbg7elvw0xhlx5cluk6rnj0dhv7wcxu66v595zvae5dkfq7kg512wscah9kotg8xmg78hr5hcy4a08r6fnvzekewzhzp1azdhi7e2npnircep5vkqakt8neojbz1wqy71wyp1xp489zctmvrh2q4xiyiz2zn2ak34csge3u9c8t4t9k15et2xp99hjygs11vfbdgk0k6sg3795rhos4dh7w1ybp9u1khx1lbvw3tim3l7a889ihlkfd3s3sgfa8c6vkjyxhjhvi6v78i8tkiki3m3fijc6q6476j2r0lz05kmpkjv3mwh1stwqbj7fs3sgdvpnc8eztw6oj0xu1qy9c1yaogykn31lsqn85f1r24afhiz31li4aizbovkfz39cj1u4hmrel76ttxbb7snq8jyted5sg5nkt0ymhpfjkd0fldce9g1r1ujcr5mbjyuw30syjxzwdx5wbm9fta89gmhk5sep8tww622enre3yx5ue11ag74irlk6v == 
\2\d\e\m\2\h\v\m\h\c\y\3\g\6\j\x\b\2\j\k\9\y\u\s\1\d\d\o\e\7\g\r\f\j\k\c\6\x\2\k\i\c\4\v\5\m\5\y\m\9\8\4\a\u\4\z\r\a\5\d\n\r\4\j\z\g\d\d\9\i\f\9\b\j\x\a\a\4\b\g\0\r\2\c\s\c\u\p\g\j\1\n\2\0\c\d\v\k\h\w\o\0\l\0\j\3\y\q\b\p\7\2\3\5\c\o\x\d\w\x\y\6\q\h\0\z\g\0\c\z\k\x\c\o\g\8\6\3\7\1\w\e\u\6\y\g\s\9\w\z\2\j\f\i\s\c\d\l\l\x\0\a\j\t\9\i\e\2\0\u\s\e\2\m\d\b\r\8\q\b\5\j\l\t\j\8\l\c\p\k\7\u\4\3\b\n\6\a\o\y\a\t\9\d\3\n\7\t\o\9\u\9\f\c\j\3\q\h\u\2\u\4\4\f\m\p\b\b\2\x\a\e\2\t\6\0\m\9\i\q\r\0\5\c\p\u\n\5\b\h\n\g\o\f\t\2\j\g\2\l\s\y\3\f\u\l\f\h\2\q\l\m\g\1\h\d\k\f\t\d\4\u\i\e\w\h\8\2\y\r\5\p\n\l\w\x\p\g\j\6\t\4\a\z\j\e\g\n\c\q\c\a\g\w\g\u\6\x\2\8\v\b\k\2\d\z\h\6\d\b\y\3\4\a\w\0\9\l\6\z\b\r\v\k\3\q\4\l\5\x\k\p\7\i\x\f\m\m\r\2\6\r\c\b\c\v\r\j\4\5\e\i\i\i\i\t\d\e\t\7\u\y\c\e\i\y\c\n\o\e\5\3\y\m\c\5\s\g\i\9\5\i\9\2\0\a\f\1\7\z\j\n\u\j\b\s\u\j\l\a\o\n\p\f\q\r\6\h\7\t\y\d\n\e\u\4\s\l\e\t\q\q\b\x\z\b\6\4\b\z\b\d\f\o\h\l\l\i\5\i\4\x\g\f\1\a\b\m\e\9\i\v\i\5\t\m\6\a\b\w\5\m\5\z\y\h\0\9\f\f\s\j\1\6\z\s\l\4\9\h\y\v\r\m\j\y\9\d\h\d\6\k\4\c\m\o\8\b\b\g\7\e\l\v\w\0\x\h\l\x\5\c\l\u\k\6\r\n\j\0\d\h\v\7\w\c\x\u\6\6\v\5\9\5\z\v\a\e\5\d\k\f\q\7\k\g\5\1\2\w\s\c\a\h\9\k\o\t\g\8\x\m\g\7\8\h\r\5\h\c\y\4\a\0\8\r\6\f\n\v\z\e\k\e\w\z\h\z\p\1\a\z\d\h\i\7\e\2\n\p\n\i\r\c\e\p\5\v\k\q\a\k\t\8\n\e\o\j\b\z\1\w\q\y\7\1\w\y\p\1\x\p\4\8\9\z\c\t\m\v\r\h\2\q\4\x\i\y\i\z\2\z\n\2\a\k\3\4\c\s\g\e\3\u\9\c\8\t\4\t\9\k\1\5\e\t\2\x\p\9\9\h\j\y\g\s\1\1\v\f\b\d\g\k\0\k\6\s\g\3\7\9\5\r\h\o\s\4\d\h\7\w\1\y\b\p\9\u\1\k\h\x\1\l\b\v\w\3\t\i\m\3\l\7\a\8\8\9\i\h\l\k\f\d\3\s\3\s\g\f\a\8\c\6\v\k\j\y\x\h\j\h\v\i\6\v\7\8\i\8\t\k\i\k\i\3\m\3\f\i\j\c\6\q\6\4\7\6\j\2\r\0\l\z\0\5\k\m\p\k\j\v\3\m\w\h\1\s\t\w\q\b\j\7\f\s\3\s\g\d\v\p\n\c\8\e\z\t\w\6\o\j\0\x\u\1\q\y\9\c\1\y\a\o\g\y\k\n\3\1\l\s\q\n\8\5\f\1\r\2\4\a\f\h\i\z\3\1\l\i\4\a\i\z\b\o\v\k\f\z\3\9\c\j\1\u\4\h\m\r\e\l\7\6\t\t\x\b\b\7\s\n\q\8\j\y\t\e\d\5\s\g\5\n\k\t\0\y\m\h\p\f\j\k\d\0\f\l\d\c\e\9\g\1\r\1\u\j\c\r\5\m\b\j\y\u\w\3\0\s\y\j\x\z\w\d\x\5\w\b\m\9\f\t\a\8\9\g\m\h\k\5\s\e\p\8\t\w\w\6\2\2\e\n\r\e\3\y\x\5\u\e\1\1\a\g\7\4\i\r\l\k\6\v ]] 00:06:40.068 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:40.637 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:40.637 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:40.637 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:40.637 20:27:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.637 [2024-11-26 20:27:40.751357] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:40.637 [2024-11-26 20:27:40.751464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61271 ] 00:06:40.637 { 00:06:40.637 "subsystems": [ 00:06:40.637 { 00:06:40.637 "subsystem": "bdev", 00:06:40.637 "config": [ 00:06:40.637 { 00:06:40.637 "params": { 00:06:40.637 "block_size": 512, 00:06:40.637 "num_blocks": 1048576, 00:06:40.637 "name": "malloc0" 00:06:40.637 }, 00:06:40.637 "method": "bdev_malloc_create" 00:06:40.637 }, 00:06:40.637 { 00:06:40.637 "params": { 00:06:40.637 "filename": "/dev/zram1", 00:06:40.637 "name": "uring0" 00:06:40.637 }, 00:06:40.637 "method": "bdev_uring_create" 00:06:40.637 }, 00:06:40.637 { 00:06:40.637 "method": "bdev_wait_for_examine" 00:06:40.637 } 00:06:40.637 ] 00:06:40.637 } 00:06:40.637 ] 00:06:40.637 } 00:06:40.637 [2024-11-26 20:27:40.893299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.637 [2024-11-26 20:27:40.950515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.895 [2024-11-26 20:27:41.003101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.269  [2024-11-26T20:27:43.236Z] Copying: 138/512 [MB] (138 MBps) [2024-11-26T20:27:44.611Z] Copying: 278/512 [MB] (140 MBps) [2024-11-26T20:27:44.870Z] Copying: 422/512 [MB] (144 MBps) [2024-11-26T20:27:45.438Z] Copying: 512/512 [MB] (average 141 MBps) 00:06:45.083 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:45.083 20:27:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.083 [2024-11-26 20:27:45.234594] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:45.084 [2024-11-26 20:27:45.234686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61335 ] 00:06:45.084 { 00:06:45.084 "subsystems": [ 00:06:45.084 { 00:06:45.084 "subsystem": "bdev", 00:06:45.084 "config": [ 00:06:45.084 { 00:06:45.084 "params": { 00:06:45.084 "block_size": 512, 00:06:45.084 "num_blocks": 1048576, 00:06:45.084 "name": "malloc0" 00:06:45.084 }, 00:06:45.084 "method": "bdev_malloc_create" 00:06:45.084 }, 00:06:45.084 { 00:06:45.084 "params": { 00:06:45.084 "filename": "/dev/zram1", 00:06:45.084 "name": "uring0" 00:06:45.084 }, 00:06:45.084 "method": "bdev_uring_create" 00:06:45.084 }, 00:06:45.084 { 00:06:45.084 "params": { 00:06:45.084 "name": "uring0" 00:06:45.084 }, 00:06:45.084 "method": "bdev_uring_delete" 00:06:45.084 }, 00:06:45.084 { 00:06:45.084 "method": "bdev_wait_for_examine" 00:06:45.084 } 00:06:45.084 ] 00:06:45.084 } 00:06:45.084 ] 00:06:45.084 } 00:06:45.084 [2024-11-26 20:27:45.376394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.084 [2024-11-26 20:27:45.430622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.343 [2024-11-26 20:27:45.486240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.602  [2024-11-26T20:27:46.216Z] Copying: 0/0 [B] (average 0 Bps) 00:06:45.861 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.861 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.861 20:27:46 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:45.861 [2024-11-26 20:27:46.146949] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:45.861 [2024-11-26 20:27:46.147043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61364 ] 00:06:45.861 { 00:06:45.861 "subsystems": [ 00:06:45.861 { 00:06:45.861 "subsystem": "bdev", 00:06:45.861 "config": [ 00:06:45.861 { 00:06:45.861 "params": { 00:06:45.861 "block_size": 512, 00:06:45.861 "num_blocks": 1048576, 00:06:45.861 "name": "malloc0" 00:06:45.861 }, 00:06:45.861 "method": "bdev_malloc_create" 00:06:45.861 }, 00:06:45.861 { 00:06:45.861 "params": { 00:06:45.861 "filename": "/dev/zram1", 00:06:45.861 "name": "uring0" 00:06:45.861 }, 00:06:45.861 "method": "bdev_uring_create" 00:06:45.861 }, 00:06:45.861 { 00:06:45.861 "params": { 00:06:45.861 "name": "uring0" 00:06:45.861 }, 00:06:45.861 "method": "bdev_uring_delete" 00:06:45.861 }, 00:06:45.861 { 00:06:45.861 "method": "bdev_wait_for_examine" 00:06:45.861 } 00:06:45.861 ] 00:06:45.861 } 00:06:45.861 ] 00:06:45.861 } 00:06:46.121 [2024-11-26 20:27:46.294584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.121 [2024-11-26 20:27:46.351944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.121 [2024-11-26 20:27:46.406525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.379 [2024-11-26 20:27:46.616839] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:46.379 [2024-11-26 20:27:46.616923] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:46.379 [2024-11-26 20:27:46.616935] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:46.379 [2024-11-26 20:27:46.616945] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.638 [2024-11-26 20:27:46.940411] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.896 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:46.896 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:46.897 20:27:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:46.897 20:27:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:46.897 20:27:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:47.155 00:06:47.155 real 0m15.464s 00:06:47.155 user 0m10.410s 00:06:47.155 sys 0m12.967s 00:06:47.155 20:27:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.155 20:27:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.155 ************************************ 00:06:47.155 END TEST dd_uring_copy 00:06:47.155 ************************************ 00:06:47.155 00:06:47.155 real 0m15.712s 00:06:47.155 user 0m10.544s 00:06:47.155 sys 0m13.082s 00:06:47.155 20:27:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.155 20:27:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:47.155 ************************************ 00:06:47.155 END TEST spdk_dd_uring 00:06:47.155 ************************************ 00:06:47.155 20:27:47 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:47.155 20:27:47 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.155 20:27:47 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.155 20:27:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:47.155 ************************************ 00:06:47.155 START TEST spdk_dd_sparse 00:06:47.155 ************************************ 00:06:47.156 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:47.156 * Looking for test storage... 00:06:47.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:47.156 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.156 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.156 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.415 --rc genhtml_branch_coverage=1 00:06:47.415 --rc genhtml_function_coverage=1 00:06:47.415 --rc genhtml_legend=1 00:06:47.415 --rc geninfo_all_blocks=1 00:06:47.415 --rc geninfo_unexecuted_blocks=1 00:06:47.415 00:06:47.415 ' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.415 --rc genhtml_branch_coverage=1 00:06:47.415 --rc genhtml_function_coverage=1 00:06:47.415 --rc genhtml_legend=1 00:06:47.415 --rc geninfo_all_blocks=1 00:06:47.415 --rc geninfo_unexecuted_blocks=1 00:06:47.415 00:06:47.415 ' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.415 --rc genhtml_branch_coverage=1 00:06:47.415 --rc genhtml_function_coverage=1 00:06:47.415 --rc genhtml_legend=1 00:06:47.415 --rc geninfo_all_blocks=1 00:06:47.415 --rc geninfo_unexecuted_blocks=1 00:06:47.415 00:06:47.415 ' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.415 --rc genhtml_branch_coverage=1 00:06:47.415 --rc genhtml_function_coverage=1 00:06:47.415 --rc genhtml_legend=1 00:06:47.415 --rc geninfo_all_blocks=1 00:06:47.415 --rc geninfo_unexecuted_blocks=1 00:06:47.415 00:06:47.415 ' 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.415 20:27:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.415 20:27:47 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:47.416 1+0 records in 00:06:47.416 1+0 records out 00:06:47.416 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00717246 s, 585 MB/s 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:47.416 1+0 records in 00:06:47.416 1+0 records out 00:06:47.416 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00746588 s, 562 MB/s 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:47.416 1+0 records in 00:06:47.416 1+0 records out 00:06:47.416 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00760442 s, 552 MB/s 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:47.416 ************************************ 00:06:47.416 START TEST dd_sparse_file_to_file 00:06:47.416 ************************************ 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:47.416 20:27:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:47.416 { 00:06:47.416 "subsystems": [ 00:06:47.416 { 00:06:47.416 "subsystem": "bdev", 00:06:47.416 "config": [ 00:06:47.416 { 00:06:47.416 "params": { 00:06:47.416 "block_size": 4096, 00:06:47.416 "filename": "dd_sparse_aio_disk", 00:06:47.416 "name": "dd_aio" 00:06:47.416 }, 00:06:47.416 "method": "bdev_aio_create" 00:06:47.416 }, 00:06:47.416 { 00:06:47.416 "params": { 00:06:47.416 "lvs_name": "dd_lvstore", 00:06:47.416 "bdev_name": "dd_aio" 00:06:47.416 }, 00:06:47.416 "method": "bdev_lvol_create_lvstore" 00:06:47.416 }, 00:06:47.416 { 00:06:47.416 "method": "bdev_wait_for_examine" 00:06:47.416 } 00:06:47.416 ] 00:06:47.416 } 00:06:47.416 ] 00:06:47.416 } 00:06:47.416 [2024-11-26 20:27:47.714281] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
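The three dd writes above (bs=4M, count=1, at seek 0, 4 and 8) leave file_zero1 logically 36 MiB long with only 12 MiB actually allocated, and the file_to_file test then verifies that this layout survives the sparse copy (file_zero1 and file_zero2 report the same size and block count). A small worked version of those numbers, using stat's conventions (%s reports bytes, %b reports 512-byte blocks):
echo $(( (8 + 1) * 4 * 1024 * 1024 ))    # logical size: 37748736 bytes (36 MiB)
echo $(( 3 * 4 * 1024 * 1024 / 512 ))    # allocated:    24576 blocks  (12 MiB)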
00:06:47.416 [2024-11-26 20:27:47.714443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61464 ] 00:06:47.675 [2024-11-26 20:27:47.866183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.676 [2024-11-26 20:27:47.927989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.676 [2024-11-26 20:27:47.984003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.934  [2024-11-26T20:27:48.548Z] Copying: 12/36 [MB] (average 1090 MBps) 00:06:48.193 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:48.193 00:06:48.193 real 0m0.702s 00:06:48.193 user 0m0.435s 00:06:48.193 sys 0m0.361s 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 ************************************ 00:06:48.193 END TEST dd_sparse_file_to_file 00:06:48.193 ************************************ 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 ************************************ 00:06:48.193 START TEST dd_sparse_file_to_bdev 00:06:48.193 ************************************ 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:48.193 20:27:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 { 00:06:48.193 "subsystems": [ 00:06:48.193 { 00:06:48.193 "subsystem": "bdev", 00:06:48.193 "config": [ 00:06:48.193 { 00:06:48.193 "params": { 00:06:48.193 "block_size": 4096, 00:06:48.193 "filename": "dd_sparse_aio_disk", 00:06:48.193 "name": "dd_aio" 00:06:48.193 }, 00:06:48.193 "method": "bdev_aio_create" 00:06:48.193 }, 00:06:48.193 { 00:06:48.193 "params": { 00:06:48.193 "lvs_name": "dd_lvstore", 00:06:48.193 "lvol_name": "dd_lvol", 00:06:48.193 "size_in_mib": 36, 00:06:48.193 "thin_provision": true 00:06:48.193 }, 00:06:48.193 "method": "bdev_lvol_create" 00:06:48.193 }, 00:06:48.193 { 00:06:48.193 "method": "bdev_wait_for_examine" 00:06:48.193 } 00:06:48.193 ] 00:06:48.193 } 00:06:48.193 ] 00:06:48.193 } 00:06:48.193 [2024-11-26 20:27:48.448335] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:48.193 [2024-11-26 20:27:48.448446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61507 ] 00:06:48.453 [2024-11-26 20:27:48.614397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.453 [2024-11-26 20:27:48.691717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.453 [2024-11-26 20:27:48.752983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.712  [2024-11-26T20:27:49.328Z] Copying: 12/36 [MB] (average 500 MBps) 00:06:48.973 00:06:48.973 00:06:48.973 real 0m0.726s 00:06:48.973 user 0m0.471s 00:06:48.973 sys 0m0.392s 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.973 ************************************ 00:06:48.973 END TEST dd_sparse_file_to_bdev 00:06:48.973 ************************************ 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:48.973 ************************************ 00:06:48.973 START TEST dd_sparse_bdev_to_file 00:06:48.973 ************************************ 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
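The file_to_bdev copy above drives spdk_dd from a JSON config generated by gen_conf and passed through /dev/fd/62. A rough standalone equivalent, with the same bdev config written to an ordinary file, is sketched below; it assumes dd_lvstore already exists on the dd_aio backing file from the earlier file_to_file run, which is why only bdev_lvol_create is needed here:

# Config mirrors the gen_conf dump above; writing it to a file is the only packaging change.
cat > dd_aio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create",
          "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
./build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json dd_aio.json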
00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:48.973 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:48.973 [2024-11-26 20:27:49.211754] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:48.973 [2024-11-26 20:27:49.211850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:06:48.973 { 00:06:48.973 "subsystems": [ 00:06:48.973 { 00:06:48.973 "subsystem": "bdev", 00:06:48.973 "config": [ 00:06:48.973 { 00:06:48.973 "params": { 00:06:48.973 "block_size": 4096, 00:06:48.973 "filename": "dd_sparse_aio_disk", 00:06:48.973 "name": "dd_aio" 00:06:48.973 }, 00:06:48.973 "method": "bdev_aio_create" 00:06:48.973 }, 00:06:48.973 { 00:06:48.973 "method": "bdev_wait_for_examine" 00:06:48.973 } 00:06:48.973 ] 00:06:48.973 } 00:06:48.973 ] 00:06:48.973 } 00:06:49.231 [2024-11-26 20:27:49.357193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.231 [2024-11-26 20:27:49.414380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.231 [2024-11-26 20:27:49.470197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.231  [2024-11-26T20:27:49.845Z] Copying: 12/36 [MB] (average 923 MBps) 00:06:49.490 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:49.490 00:06:49.490 real 0m0.621s 00:06:49.490 user 0m0.375s 00:06:49.490 
sys 0m0.349s 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:49.490 ************************************ 00:06:49.490 END TEST dd_sparse_bdev_to_file 00:06:49.490 ************************************ 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:49.490 00:06:49.490 real 0m2.426s 00:06:49.490 user 0m1.446s 00:06:49.490 sys 0m1.311s 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.490 ************************************ 00:06:49.490 END TEST spdk_dd_sparse 00:06:49.490 ************************************ 00:06:49.490 20:27:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:49.749 20:27:49 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:49.749 20:27:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.749 20:27:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.749 20:27:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.749 ************************************ 00:06:49.749 START TEST spdk_dd_negative 00:06:49.749 ************************************ 00:06:49.749 20:27:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:49.749 * Looking for test storage... 
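All three sparse tests above pass or fail on the same pair of stat comparisons: source and destination must agree on apparent size (%s, 37748736 bytes in this run) and on allocated 512-byte blocks (%b, 24576 here, i.e. only the 12 MiB actually written). Outside the harness the check is simply:

# Hole-preservation check as used by dd/sparse.sh, runnable on any source/destination pair.
src=file_zero1
dst=file_zero2
[[ $(stat --printf=%s "$src") == $(stat --printf=%s "$dst") ]] || echo "apparent size mismatch"
[[ $(stat --printf=%b "$src") == $(stat --printf=%b "$dst") ]] || echo "allocated block mismatch: holes were not preserved"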
00:06:49.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.749 20:27:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.749 20:27:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.749 20:27:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.749 --rc genhtml_branch_coverage=1 00:06:49.749 --rc genhtml_function_coverage=1 00:06:49.749 --rc genhtml_legend=1 00:06:49.749 --rc geninfo_all_blocks=1 00:06:49.749 --rc geninfo_unexecuted_blocks=1 00:06:49.749 00:06:49.749 ' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.749 --rc genhtml_branch_coverage=1 00:06:49.749 --rc genhtml_function_coverage=1 00:06:49.749 --rc genhtml_legend=1 00:06:49.749 --rc geninfo_all_blocks=1 00:06:49.749 --rc geninfo_unexecuted_blocks=1 00:06:49.749 00:06:49.749 ' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.749 --rc genhtml_branch_coverage=1 00:06:49.749 --rc genhtml_function_coverage=1 00:06:49.749 --rc genhtml_legend=1 00:06:49.749 --rc geninfo_all_blocks=1 00:06:49.749 --rc geninfo_unexecuted_blocks=1 00:06:49.749 00:06:49.749 ' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.749 --rc genhtml_branch_coverage=1 00:06:49.749 --rc genhtml_function_coverage=1 00:06:49.749 --rc genhtml_legend=1 00:06:49.749 --rc geninfo_all_blocks=1 00:06:49.749 --rc geninfo_unexecuted_blocks=1 00:06:49.749 00:06:49.749 ' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:49.749 ************************************ 00:06:49.749 START TEST 
dd_invalid_arguments 00:06:49.749 ************************************ 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.749 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.750 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.750 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.750 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.750 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:49.750 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:50.009 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:50.009 00:06:50.009 CPU options: 00:06:50.009 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:50.009 (like [0,1,10]) 00:06:50.009 --lcores lcore to CPU mapping list. The list is in the format: 00:06:50.009 [<,lcores[@CPUs]>...] 00:06:50.009 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:50.009 Within the group, '-' is used for range separator, 00:06:50.009 ',' is used for single number separator. 00:06:50.009 '( )' can be omitted for single element group, 00:06:50.009 '@' can be omitted if cpus and lcores have the same value 00:06:50.009 --disable-cpumask-locks Disable CPU core lock files. 00:06:50.009 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:50.009 pollers in the app support interrupt mode) 00:06:50.009 -p, --main-core main (primary) core for DPDK 00:06:50.009 00:06:50.009 Configuration options: 00:06:50.009 -c, --config, --json JSON config file 00:06:50.009 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:50.009 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:50.009 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:50.009 --rpcs-allowed comma-separated list of permitted RPCS 00:06:50.009 --json-ignore-init-errors don't exit on invalid config entry 00:06:50.009 00:06:50.009 Memory options: 00:06:50.009 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:50.009 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:50.009 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:50.009 -R, --huge-unlink unlink huge files after initialization 00:06:50.009 -n, --mem-channels number of memory channels used for DPDK 00:06:50.009 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:50.009 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:50.009 --no-huge run without using hugepages 00:06:50.009 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:50.009 -i, --shm-id shared memory ID (optional) 00:06:50.009 -g, --single-file-segments force creating just one hugetlbfs file 00:06:50.009 00:06:50.009 PCI options: 00:06:50.009 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:50.009 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:50.009 -u, --no-pci disable PCI access 00:06:50.009 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:50.009 00:06:50.009 Log options: 00:06:50.009 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:50.009 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:50.009 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:50.009 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:50.009 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:50.009 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:50.009 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:50.009 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:50.009 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:50.009 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:50.009 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:50.009 --silence-noticelog disable notice level logging to stderr 00:06:50.009 00:06:50.009 Trace options: 00:06:50.009 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:50.009 setting 0 to disable trace (default 32768) 00:06:50.009 Tracepoints vary in size and can use more than one trace entry. 00:06:50.009 -e, --tpoint-group [:] 00:06:50.009 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:50.009 [2024-11-26 20:27:50.140537] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:50.009 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:50.009 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:50.009 bdev_raid, scheduler, all). 00:06:50.009 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:50.009 a tracepoint group. First tpoint inside a group can be enabled by 00:06:50.009 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:50.009 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:50.009 in /include/spdk_internal/trace_defs.h 00:06:50.009 00:06:50.009 Other options: 00:06:50.009 -h, --help show this usage 00:06:50.009 -v, --version print SPDK version 00:06:50.009 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:50.009 --env-context Opaque context for use of the env implementation 00:06:50.009 00:06:50.009 Application specific: 00:06:50.009 [--------- DD Options ---------] 00:06:50.009 --if Input file. Must specify either --if or --ib. 00:06:50.009 --ib Input bdev. Must specifier either --if or --ib 00:06:50.009 --of Output file. Must specify either --of or --ob. 00:06:50.009 --ob Output bdev. Must specify either --of or --ob. 00:06:50.009 --iflag Input file flags. 00:06:50.010 --oflag Output file flags. 00:06:50.010 --bs I/O unit size (default: 4096) 00:06:50.010 --qd Queue depth (default: 2) 00:06:50.010 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:50.010 --skip Skip this many I/O units at start of input. (default: 0) 00:06:50.010 --seek Skip this many I/O units at start of output. (default: 0) 00:06:50.010 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:50.010 --sparse Enable hole skipping in input target 00:06:50.010 Available iflag and oflag values: 00:06:50.010 append - append mode 00:06:50.010 direct - use direct I/O for data 00:06:50.010 directory - fail unless a directory 00:06:50.010 dsync - use synchronized I/O for data 00:06:50.010 noatime - do not update access time 00:06:50.010 noctty - do not assign controlling terminal from file 00:06:50.010 nofollow - do not follow symlinks 00:06:50.010 nonblock - use non-blocking I/O 00:06:50.010 sync - use synchronized I/O for data and metadata 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:50.010 ************************************ 00:06:50.010 END TEST dd_invalid_arguments 00:06:50.010 ************************************ 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.010 00:06:50.010 real 0m0.078s 00:06:50.010 user 0m0.044s 00:06:50.010 sys 0m0.031s 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.010 ************************************ 00:06:50.010 START TEST dd_double_input 00:06:50.010 ************************************ 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:50.010 [2024-11-26 20:27:50.269083] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
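The double_input case above hands spdk_dd both a file input (--if) and a bdev input (--ib) and expects the spdk_dd.c:1487 rejection shown in the trace. Stripped of the NOT/valid_exec_arg plumbing from autotest_common.sh, the same check reduces to roughly:

# Plain-bash sketch of the double-input negative test (arguments copied from the traced run).
if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob= >err.log 2>&1; then
    echo "spdk_dd unexpectedly accepted both --if and --ib" >&2
    exit 1
fi
grep -q 'either --if or --ib, but not both' err.log && echo "rejected as expected"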
00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.010 ************************************ 00:06:50.010 END TEST dd_double_input 00:06:50.010 ************************************ 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.010 00:06:50.010 real 0m0.077s 00:06:50.010 user 0m0.044s 00:06:50.010 sys 0m0.030s 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.010 ************************************ 00:06:50.010 START TEST dd_double_output 00:06:50.010 ************************************ 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.010 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:50.269 [2024-11-26 20:27:50.399852] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.269 00:06:50.269 real 0m0.079s 00:06:50.269 user 0m0.046s 00:06:50.269 sys 0m0.032s 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.269 ************************************ 00:06:50.269 END TEST dd_double_output 00:06:50.269 ************************************ 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.269 ************************************ 00:06:50.269 START TEST dd_no_input 00:06:50.269 ************************************ 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:50.269 [2024-11-26 20:27:50.527389] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.269 00:06:50.269 real 0m0.076s 00:06:50.269 user 0m0.049s 00:06:50.269 sys 0m0.026s 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.269 ************************************ 00:06:50.269 END TEST dd_no_input 00:06:50.269 ************************************ 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.269 ************************************ 00:06:50.269 START TEST dd_no_output 00:06:50.269 ************************************ 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:50.269 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.270 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.529 [2024-11-26 20:27:50.653686] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:50.529 20:27:50 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.529 00:06:50.529 real 0m0.075s 00:06:50.529 user 0m0.040s 00:06:50.529 sys 0m0.034s 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.529 ************************************ 00:06:50.529 END TEST dd_no_output 00:06:50.529 ************************************ 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.529 ************************************ 00:06:50.529 START TEST dd_wrong_blocksize 00:06:50.529 ************************************ 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:50.529 [2024-11-26 20:27:50.774458] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.529 00:06:50.529 real 0m0.065s 00:06:50.529 user 0m0.041s 00:06:50.529 sys 0m0.024s 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.529 ************************************ 00:06:50.529 END TEST dd_wrong_blocksize 00:06:50.529 ************************************ 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.529 ************************************ 00:06:50.529 START TEST dd_smaller_blocksize 00:06:50.529 ************************************ 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.529 
20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.529 20:27:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:50.788 [2024-11-26 20:27:50.895907] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:50.788 [2024-11-26 20:27:50.896122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61771 ] 00:06:50.788 [2024-11-26 20:27:51.042614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.788 [2024-11-26 20:27:51.107194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.046 [2024-11-26 20:27:51.164668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.304 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:51.563 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:51.563 [2024-11-26 20:27:51.777827] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:51.563 [2024-11-26 20:27:51.777891] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.563 [2024-11-26 20:27:51.901990] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.822 00:06:51.822 real 0m1.128s 00:06:51.822 user 0m0.426s 00:06:51.822 sys 0m0.593s 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.822 ************************************ 00:06:51.822 END TEST dd_smaller_blocksize 00:06:51.822 ************************************ 00:06:51.822 20:27:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.822 ************************************ 00:06:51.822 START TEST dd_invalid_count 00:06:51.822 ************************************ 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
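The smaller_blocksize case that finished above actually starts the SPDK app: spdk_dd accepts --bs=99999999999999, fails allocation with "Cannot allocate memory - try smaller block size value", and the harness maps the resulting exit status (es=244, then 116, then 1) to a pass. Without that bookkeeping it is roughly:

# Expect spdk_dd to reject a block size it cannot allocate (value taken from the traced run).
if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999 >err.log 2>&1; then
    echo "unexpected success with oversized --bs" >&2
    exit 1
fi
grep -q 'try smaller block size value' err.log && echo "rejected as expected"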
00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:51.822 [2024-11-26 20:27:52.080759] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.822 00:06:51.822 real 0m0.084s 00:06:51.822 user 0m0.053s 00:06:51.822 sys 0m0.029s 00:06:51.822 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.822 ************************************ 00:06:51.822 END TEST dd_invalid_count 00:06:51.822 ************************************ 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.823 ************************************ 
00:06:51.823 START TEST dd_invalid_oflag 00:06:51.823 ************************************ 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.823 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.081 [2024-11-26 20:27:52.197735] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:52.081 ************************************ 00:06:52.081 END TEST dd_invalid_oflag 00:06:52.081 ************************************ 00:06:52.081 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:52.081 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.081 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.081 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.081 00:06:52.082 real 0m0.065s 00:06:52.082 user 0m0.043s 00:06:52.082 sys 0m0.021s 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 ************************************ 00:06:52.082 START TEST dd_invalid_iflag 00:06:52.082 
************************************ 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.082 [2024-11-26 20:27:52.313734] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.082 00:06:52.082 real 0m0.075s 00:06:52.082 user 0m0.047s 00:06:52.082 sys 0m0.028s 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.082 ************************************ 00:06:52.082 END TEST dd_invalid_iflag 00:06:52.082 ************************************ 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.082 ************************************ 00:06:52.082 START TEST dd_unknown_flag 00:06:52.082 ************************************ 00:06:52.082 
20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.082 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.340 [2024-11-26 20:27:52.440620] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:52.340 [2024-11-26 20:27:52.440716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61869 ] 00:06:52.340 [2024-11-26 20:27:52.590196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.340 [2024-11-26 20:27:52.656930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.598 [2024-11-26 20:27:52.713636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.599 [2024-11-26 20:27:52.751093] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:52.599 [2024-11-26 20:27:52.751453] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.599 [2024-11-26 20:27:52.751526] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:52.599 [2024-11-26 20:27:52.751541] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.599 [2024-11-26 20:27:52.751785] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:52.599 [2024-11-26 20:27:52.751802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.599 [2024-11-26 20:27:52.751856] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:52.599 [2024-11-26 20:27:52.751866] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:52.599 [2024-11-26 20:27:52.871131] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.599 00:06:52.599 real 0m0.556s 00:06:52.599 user 0m0.305s 00:06:52.599 sys 0m0.156s 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.599 20:27:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 ************************************ 00:06:52.599 END TEST dd_unknown_flag 00:06:52.599 ************************************ 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.857 ************************************ 00:06:52.857 START TEST dd_invalid_json 00:06:52.857 ************************************ 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.857 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.858 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.858 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.858 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.858 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.858 20:27:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:52.858 [2024-11-26 20:27:53.037455] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:52.858 [2024-11-26 20:27:53.037543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61903 ] 00:06:52.858 [2024-11-26 20:27:53.191168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.117 [2024-11-26 20:27:53.254737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.117 [2024-11-26 20:27:53.255069] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:53.117 [2024-11-26 20:27:53.255101] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:53.117 [2024-11-26 20:27:53.255114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.117 [2024-11-26 20:27:53.255161] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:53.117 ************************************ 00:06:53.117 END TEST dd_invalid_json 00:06:53.117 ************************************ 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.117 00:06:53.117 real 0m0.350s 00:06:53.117 user 0m0.177s 00:06:53.117 sys 0m0.071s 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.117 20:27:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.118 ************************************ 00:06:53.118 START TEST dd_invalid_seek 00:06:53.118 ************************************ 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:53.118 
20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.118 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:53.118 { 00:06:53.118 "subsystems": [ 00:06:53.118 { 00:06:53.118 "subsystem": "bdev", 00:06:53.118 "config": [ 00:06:53.118 { 00:06:53.118 "params": { 00:06:53.118 "block_size": 512, 00:06:53.118 "num_blocks": 512, 00:06:53.118 "name": "malloc0" 00:06:53.118 }, 00:06:53.118 "method": "bdev_malloc_create" 00:06:53.118 }, 00:06:53.118 { 00:06:53.118 "params": { 00:06:53.118 "block_size": 512, 00:06:53.118 "num_blocks": 512, 00:06:53.118 "name": "malloc1" 00:06:53.118 }, 00:06:53.118 "method": "bdev_malloc_create" 00:06:53.118 }, 00:06:53.118 { 00:06:53.118 "method": "bdev_wait_for_examine" 00:06:53.118 } 00:06:53.118 ] 00:06:53.118 } 00:06:53.118 ] 00:06:53.118 } 00:06:53.118 [2024-11-26 20:27:53.440335] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:53.118 [2024-11-26 20:27:53.440432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61927 ] 00:06:53.376 [2024-11-26 20:27:53.590647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.376 [2024-11-26 20:27:53.655621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.376 [2024-11-26 20:27:53.713805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.634 [2024-11-26 20:27:53.780903] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:53.634 [2024-11-26 20:27:53.780983] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.634 [2024-11-26 20:27:53.903180] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:53.634 ************************************ 00:06:53.634 END TEST dd_invalid_seek 00:06:53.634 ************************************ 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.634 00:06:53.634 real 0m0.592s 00:06:53.634 user 0m0.386s 00:06:53.634 sys 0m0.163s 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.634 20:27:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.893 ************************************ 00:06:53.893 START TEST dd_invalid_skip 00:06:53.893 ************************************ 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.893 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:53.893 { 00:06:53.893 "subsystems": [ 00:06:53.893 { 00:06:53.893 "subsystem": "bdev", 00:06:53.893 "config": [ 00:06:53.893 { 00:06:53.893 "params": { 00:06:53.893 "block_size": 512, 00:06:53.893 "num_blocks": 512, 00:06:53.893 "name": "malloc0" 00:06:53.893 }, 00:06:53.893 "method": "bdev_malloc_create" 00:06:53.893 }, 00:06:53.893 { 00:06:53.893 "params": { 00:06:53.893 "block_size": 512, 00:06:53.893 "num_blocks": 512, 00:06:53.893 "name": "malloc1" 00:06:53.893 }, 00:06:53.893 "method": "bdev_malloc_create" 00:06:53.893 }, 00:06:53.893 { 00:06:53.893 "method": "bdev_wait_for_examine" 00:06:53.893 } 00:06:53.893 ] 00:06:53.893 } 00:06:53.893 ] 00:06:53.893 } 00:06:53.893 [2024-11-26 20:27:54.072477] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:53.893 [2024-11-26 20:27:54.072595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61966 ] 00:06:53.893 [2024-11-26 20:27:54.219167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.152 [2024-11-26 20:27:54.275817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.152 [2024-11-26 20:27:54.332957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.152 [2024-11-26 20:27:54.400530] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:54.152 [2024-11-26 20:27:54.400807] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.411 [2024-11-26 20:27:54.529162] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.411 00:06:54.411 real 0m0.583s 00:06:54.411 user 0m0.380s 00:06:54.411 sys 0m0.157s 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.411 ************************************ 00:06:54.411 END TEST dd_invalid_skip 00:06:54.411 ************************************ 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.411 ************************************ 00:06:54.411 START TEST dd_invalid_input_count 00:06:54.411 ************************************ 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:54.411 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.412 20:27:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:54.412 [2024-11-26 20:27:54.700476] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:54.412 [2024-11-26 20:27:54.700630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61994 ] 00:06:54.412 { 00:06:54.412 "subsystems": [ 00:06:54.412 { 00:06:54.412 "subsystem": "bdev", 00:06:54.412 "config": [ 00:06:54.412 { 00:06:54.412 "params": { 00:06:54.412 "block_size": 512, 00:06:54.412 "num_blocks": 512, 00:06:54.412 "name": "malloc0" 00:06:54.412 }, 00:06:54.412 "method": "bdev_malloc_create" 00:06:54.412 }, 00:06:54.412 { 00:06:54.412 "params": { 00:06:54.412 "block_size": 512, 00:06:54.412 "num_blocks": 512, 00:06:54.412 "name": "malloc1" 00:06:54.412 }, 00:06:54.412 "method": "bdev_malloc_create" 00:06:54.412 }, 00:06:54.412 { 00:06:54.412 "method": "bdev_wait_for_examine" 00:06:54.412 } 00:06:54.412 ] 00:06:54.412 } 00:06:54.412 ] 00:06:54.412 } 00:06:54.671 [2024-11-26 20:27:54.841594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.671 [2024-11-26 20:27:54.908441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.671 [2024-11-26 20:27:54.964600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.928 [2024-11-26 20:27:55.030784] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:54.928 [2024-11-26 20:27:55.030838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.928 [2024-11-26 20:27:55.156860] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.928 00:06:54.928 real 0m0.577s 00:06:54.928 user 0m0.375s 00:06:54.928 sys 0m0.164s 00:06:54.928 ************************************ 00:06:54.928 END TEST dd_invalid_input_count 00:06:54.928 ************************************ 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.928 ************************************ 00:06:54.928 START TEST dd_invalid_output_count 00:06:54.928 ************************************ 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.928 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.929 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.929 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.929 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:55.186 { 00:06:55.186 "subsystems": [ 00:06:55.186 { 00:06:55.186 "subsystem": "bdev", 00:06:55.186 "config": [ 00:06:55.186 { 00:06:55.186 "params": { 00:06:55.186 "block_size": 512, 00:06:55.186 "num_blocks": 512, 00:06:55.186 "name": "malloc0" 00:06:55.186 }, 00:06:55.186 "method": "bdev_malloc_create" 00:06:55.186 }, 00:06:55.186 { 00:06:55.186 "method": "bdev_wait_for_examine" 00:06:55.186 } 00:06:55.186 ] 00:06:55.186 } 00:06:55.186 ] 00:06:55.186 } 00:06:55.186 [2024-11-26 20:27:55.328322] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 
initialization... 00:06:55.186 [2024-11-26 20:27:55.328417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62033 ] 00:06:55.186 [2024-11-26 20:27:55.476459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.186 [2024-11-26 20:27:55.537109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.444 [2024-11-26 20:27:55.593988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.444 [2024-11-26 20:27:55.650880] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:55.444 [2024-11-26 20:27:55.650991] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.444 [2024-11-26 20:27:55.773747] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.704 ************************************ 00:06:55.704 END TEST dd_invalid_output_count 00:06:55.704 ************************************ 00:06:55.704 00:06:55.704 real 0m0.573s 00:06:55.704 user 0m0.376s 00:06:55.704 sys 0m0.152s 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:55.704 ************************************ 00:06:55.704 START TEST dd_bs_not_multiple 00:06:55.704 ************************************ 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:55.704 20:27:55 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.704 20:27:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:55.704 [2024-11-26 20:27:55.944699] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:06:55.704 [2024-11-26 20:27:55.944787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:06:55.704 { 00:06:55.704 "subsystems": [ 00:06:55.704 { 00:06:55.704 "subsystem": "bdev", 00:06:55.704 "config": [ 00:06:55.704 { 00:06:55.704 "params": { 00:06:55.704 "block_size": 512, 00:06:55.704 "num_blocks": 512, 00:06:55.704 "name": "malloc0" 00:06:55.704 }, 00:06:55.704 "method": "bdev_malloc_create" 00:06:55.704 }, 00:06:55.704 { 00:06:55.704 "params": { 00:06:55.704 "block_size": 512, 00:06:55.704 "num_blocks": 512, 00:06:55.704 "name": "malloc1" 00:06:55.704 }, 00:06:55.704 "method": "bdev_malloc_create" 00:06:55.704 }, 00:06:55.704 { 00:06:55.704 "method": "bdev_wait_for_examine" 00:06:55.704 } 00:06:55.704 ] 00:06:55.704 } 00:06:55.704 ] 00:06:55.704 } 00:06:55.963 [2024-11-26 20:27:56.087804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.963 [2024-11-26 20:27:56.145951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.963 [2024-11-26 20:27:56.202401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.963 [2024-11-26 20:27:56.268208] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:55.963 [2024-11-26 20:27:56.268297] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.222 [2024-11-26 20:27:56.395088] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.222 00:06:56.222 real 0m0.573s 00:06:56.222 user 0m0.379s 00:06:56.222 sys 0m0.154s 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:56.222 ************************************ 00:06:56.222 END TEST dd_bs_not_multiple 00:06:56.222 ************************************ 00:06:56.222 00:06:56.222 real 0m6.611s 00:06:56.222 user 0m3.565s 00:06:56.222 sys 0m2.466s 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.222 20:27:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:56.222 ************************************ 00:06:56.222 END TEST spdk_dd_negative 00:06:56.222 ************************************ 00:06:56.222 00:06:56.222 real 1m19.134s 00:06:56.222 user 0m50.924s 00:06:56.222 sys 0m34.847s 00:06:56.222 20:27:56 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.222 ************************************ 00:06:56.222 END TEST spdk_dd 00:06:56.222 
************************************ 00:06:56.222 20:27:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:56.481 20:27:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:56.481 20:27:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.481 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:06:56.481 20:27:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:56.481 20:27:56 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:56.481 20:27:56 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:56.481 20:27:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.481 20:27:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.481 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:06:56.481 ************************************ 00:06:56.481 START TEST nvmf_tcp 00:06:56.481 ************************************ 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:56.481 * Looking for test storage... 00:06:56.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.481 20:27:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.481 --rc genhtml_branch_coverage=1 00:06:56.481 --rc genhtml_function_coverage=1 00:06:56.481 --rc genhtml_legend=1 00:06:56.481 --rc geninfo_all_blocks=1 00:06:56.481 --rc geninfo_unexecuted_blocks=1 00:06:56.481 00:06:56.481 ' 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.481 --rc genhtml_branch_coverage=1 00:06:56.481 --rc genhtml_function_coverage=1 00:06:56.481 --rc genhtml_legend=1 00:06:56.481 --rc geninfo_all_blocks=1 00:06:56.481 --rc geninfo_unexecuted_blocks=1 00:06:56.481 00:06:56.481 ' 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.481 --rc genhtml_branch_coverage=1 00:06:56.481 --rc genhtml_function_coverage=1 00:06:56.481 --rc genhtml_legend=1 00:06:56.481 --rc geninfo_all_blocks=1 00:06:56.481 --rc geninfo_unexecuted_blocks=1 00:06:56.481 00:06:56.481 ' 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.481 --rc genhtml_branch_coverage=1 00:06:56.481 --rc genhtml_function_coverage=1 00:06:56.481 --rc genhtml_legend=1 00:06:56.481 --rc geninfo_all_blocks=1 00:06:56.481 --rc geninfo_unexecuted_blocks=1 00:06:56.481 00:06:56.481 ' 00:06:56.481 20:27:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:56.481 20:27:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:56.481 20:27:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.481 20:27:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.481 ************************************ 00:06:56.481 START TEST nvmf_target_core 00:06:56.481 ************************************ 00:06:56.481 20:27:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:56.742 * Looking for test storage... 00:06:56.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.742 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.743 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.743 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.743 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.743 20:27:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.743 --rc genhtml_branch_coverage=1 00:06:56.743 --rc genhtml_function_coverage=1 00:06:56.743 --rc genhtml_legend=1 00:06:56.743 --rc geninfo_all_blocks=1 00:06:56.743 --rc geninfo_unexecuted_blocks=1 00:06:56.743 00:06:56.743 ' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.743 --rc genhtml_branch_coverage=1 00:06:56.743 --rc genhtml_function_coverage=1 00:06:56.743 --rc genhtml_legend=1 00:06:56.743 --rc geninfo_all_blocks=1 00:06:56.743 --rc geninfo_unexecuted_blocks=1 00:06:56.743 00:06:56.743 ' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.743 --rc genhtml_branch_coverage=1 00:06:56.743 --rc genhtml_function_coverage=1 00:06:56.743 --rc genhtml_legend=1 00:06:56.743 --rc geninfo_all_blocks=1 00:06:56.743 --rc geninfo_unexecuted_blocks=1 00:06:56.743 00:06:56.743 ' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.743 --rc genhtml_branch_coverage=1 00:06:56.743 --rc genhtml_function_coverage=1 00:06:56.743 --rc genhtml_legend=1 00:06:56.743 --rc geninfo_all_blocks=1 00:06:56.743 --rc geninfo_unexecuted_blocks=1 00:06:56.743 00:06:56.743 ' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.743 ************************************ 00:06:56.743 START TEST nvmf_host_management 00:06:56.743 ************************************ 00:06:56.743 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:57.002 * Looking for test storage... 
00:06:57.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:57.002 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.003 --rc genhtml_branch_coverage=1 00:06:57.003 --rc genhtml_function_coverage=1 00:06:57.003 --rc genhtml_legend=1 00:06:57.003 --rc geninfo_all_blocks=1 00:06:57.003 --rc geninfo_unexecuted_blocks=1 00:06:57.003 00:06:57.003 ' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.003 --rc genhtml_branch_coverage=1 00:06:57.003 --rc genhtml_function_coverage=1 00:06:57.003 --rc genhtml_legend=1 00:06:57.003 --rc geninfo_all_blocks=1 00:06:57.003 --rc geninfo_unexecuted_blocks=1 00:06:57.003 00:06:57.003 ' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.003 --rc genhtml_branch_coverage=1 00:06:57.003 --rc genhtml_function_coverage=1 00:06:57.003 --rc genhtml_legend=1 00:06:57.003 --rc geninfo_all_blocks=1 00:06:57.003 --rc geninfo_unexecuted_blocks=1 00:06:57.003 00:06:57.003 ' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.003 --rc genhtml_branch_coverage=1 00:06:57.003 --rc genhtml_function_coverage=1 00:06:57.003 --rc genhtml_legend=1 00:06:57.003 --rc geninfo_all_blocks=1 00:06:57.003 --rc geninfo_unexecuted_blocks=1 00:06:57.003 00:06:57.003 ' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
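
The lcov version probe that repeats at the top of each test scope above boils down to splitting the two version strings on '.', '-' and ':' and comparing them field by field; because "lt 1.15 2" succeeds, the newer lcov option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported. A simplified sketch of the helper being traced, reduced to the '<' case exercised here (not the verbatim scripts/common.sh code):

    # Simplified sketch of the cmp_versions/lt helper traced above; only the "<"
    # comparison used by the lcov probe is modelled, and fields are assumed numeric.
    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v a b
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && return 1   # first differing field decides
            ((a < b)) && return 0
        done
        return 1                    # equal versions are not strictly "<"
    }

    # lt 1.15 2  -> returns 0, so LCOV_OPTS gets the branch/function coverage flags.
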
00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.003 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:57.003 20:27:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:57.004 Cannot find device "nvmf_init_br" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:57.004 Cannot find device "nvmf_init_br2" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:57.004 Cannot find device "nvmf_tgt_br" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:57.004 Cannot find device "nvmf_tgt_br2" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:57.004 Cannot find device "nvmf_init_br" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:57.004 Cannot find device "nvmf_init_br2" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:57.004 Cannot find device "nvmf_tgt_br" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:57.004 Cannot find device "nvmf_tgt_br2" 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:57.004 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:57.263 Cannot find device "nvmf_br" 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:57.263 Cannot find device "nvmf_init_if" 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:57.263 Cannot find device "nvmf_init_if2" 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:57.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:57.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:57.263 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:57.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:57.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:06:57.521 00:06:57.521 --- 10.0.0.3 ping statistics --- 00:06:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.521 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:57.521 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:57.521 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:06:57.521 00:06:57.521 --- 10.0.0.4 ping statistics --- 00:06:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.521 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:57.521 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:57.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:57.521 00:06:57.522 --- 10.0.0.1 ping statistics --- 00:06:57.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.522 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:57.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:57.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:06:57.522 00:06:57.522 --- 10.0.0.2 ping statistics --- 00:06:57.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.522 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62398 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62398 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62398 ']' 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.522 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.522 [2024-11-26 20:27:57.806888] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
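
Since NET_TYPE=virt, none of this touches a physical NIC: nvmf_veth_init stitches together the namespace, veth pairs, bridge and iptables rules whose individual commands are traced above. Condensed into one sketch for readability (device names, addresses and rules taken verbatim from the log; the real helper in test/nvmf/common.sh also handles teardown of stale devices and error paths not shown):

    # Condensed sketch of the topology nvmf_veth_init builds (names/addresses as in the trace).
    # The SPDK target runs inside the nvmf_tgt_ns_spdk namespace; the initiator stays in the
    # default namespace, and the nvmf_br bridge ties the two veth pairs together.
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # One veth pair per initiator interface, one per target interface.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target-side ends into the namespace and assign the 10.0.0.0/24 addresses.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, create the bridge and enslave the bridge-side ends.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP (port 4420) in and let the bridge forward between its ports.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check, exactly as in the trace: each side can reach the other's addresses.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec "$NS" ping -c 1 10.0.0.1 && ip netns exec "$NS" ping -c 1 10.0.0.2
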
00:06:57.522 [2024-11-26 20:27:57.806989] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.781 [2024-11-26 20:27:57.965116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.781 [2024-11-26 20:27:58.057410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.782 [2024-11-26 20:27:58.057486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:57.783 [2024-11-26 20:27:58.057501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.783 [2024-11-26 20:27:58.057512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.783 [2024-11-26 20:27:58.057522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:57.783 [2024-11-26 20:27:58.058960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.783 [2024-11-26 20:27:58.059077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.783 [2024-11-26 20:27:58.059191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.783 [2024-11-26 20:27:58.059193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.043 [2024-11-26 20:27:58.136964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.610 [2024-11-26 20:27:58.923106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.610 20:27:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.869 Malloc0 00:06:58.869 [2024-11-26 20:27:59.010091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62452 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62452 /var/tmp/bdevperf.sock 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62452 ']' 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:58.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
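
The rpcs.txt file that host_management.sh regenerates and pipes into rpc_cmd is not echoed in this excerpt; only its effects are visible (the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.3 port 4420). A plausible reconstruction of the batched RPCs, using the sizes (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), serial and NQNs that appear elsewhere in the trace; this is an illustration, not the verbatim file:

    # Hypothetical reconstruction of the RPC batch applied via rpc_cmd above.
    # The command names and flags are real SPDK rpc.py calls; the exact file
    # contents are an assumption based on the effects visible in the log.
    bdev_malloc_create -b Malloc0 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
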
00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:58.869 { 00:06:58.869 "params": { 00:06:58.869 "name": "Nvme$subsystem", 00:06:58.869 "trtype": "$TEST_TRANSPORT", 00:06:58.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:58.869 "adrfam": "ipv4", 00:06:58.869 "trsvcid": "$NVMF_PORT", 00:06:58.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:58.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:58.869 "hdgst": ${hdgst:-false}, 00:06:58.869 "ddgst": ${ddgst:-false} 00:06:58.869 }, 00:06:58.869 "method": "bdev_nvme_attach_controller" 00:06:58.869 } 00:06:58.869 EOF 00:06:58.869 )") 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:58.869 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:58.869 "params": { 00:06:58.869 "name": "Nvme0", 00:06:58.869 "trtype": "tcp", 00:06:58.869 "traddr": "10.0.0.3", 00:06:58.869 "adrfam": "ipv4", 00:06:58.869 "trsvcid": "4420", 00:06:58.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:58.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:58.869 "hdgst": false, 00:06:58.869 "ddgst": false 00:06:58.869 }, 00:06:58.869 "method": "bdev_nvme_attach_controller" 00:06:58.869 }' 00:06:58.869 [2024-11-26 20:27:59.116172] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:06:58.869 [2024-11-26 20:27:59.116283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62452 ] 00:06:59.127 [2024-11-26 20:27:59.264989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.127 [2024-11-26 20:27:59.334612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.127 [2024-11-26 20:27:59.403491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.386 Running I/O for 10 seconds... 
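
bdevperf is configured entirely through the --json file generated above: its bdev_nvme_attach_controller entry creates an NVMe-oF bdev over TCP to the listener at 10.0.0.3:4420 using the host NQN the subsystem admits. For illustration only, the same attachment could be performed by hand against the bdevperf RPC socket once the app is up, with the same parameter values printed in the trace; the resulting Nvme0n1 bdev is what the waitforio loop below polls via bdev_get_iostat:

    # Illustration: manual equivalent of the generated --json config (same values).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
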
00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:59.386 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.646 20:27:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:59.646 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.647 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.647 [2024-11-26 20:27:59.979990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 
[2024-11-26 20:27:59.980168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 
20:27:59.980401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 
20:27:59.980605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 
20:27:59.980824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.647 [2024-11-26 20:27:59.980850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.647 [2024-11-26 20:27:59.980859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.980880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.980901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.980921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.980941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.980961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.980981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.980992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 
20:27:59.981033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 
20:27:59.981284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.648 [2024-11-26 20:27:59.981461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf072d0 is same with the state(6) to be set 00:06:59.648 [2024-11-26 20:27:59.981675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.648 [2024-11-26 20:27:59.981693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.648 [2024-11-26 20:27:59.981714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.648 [2024-11-26 20:27:59.981733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.648 [2024-11-26 20:27:59.981752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:59.648 [2024-11-26 20:27:59.981761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0cce0 is same with the state(6) to be set 00:06:59.648 [2024-11-26 20:27:59.982849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:59.648 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.648 20:27:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:59.648 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:59.648 00:06:59.648 Latency(us) 00:06:59.648 [2024-11-26T20:28:00.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.648 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:59.648 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:59.648 Verification LBA range: start 0x0 length 0x400 00:06:59.648 Nvme0n1 : 0.46 1405.44 87.84 140.54 0.00 39763.50 2204.39 46947.61 00:06:59.648 [2024-11-26T20:28:00.003Z] =================================================================================================================== 00:06:59.648 [2024-11-26T20:28:00.003Z] Total : 1405.44 87.84 140.54 0.00 39763.50 2204.39 46947.61 00:06:59.648 [2024-11-26 20:27:59.984888] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.648 [2024-11-26 20:27:59.984922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0cce0 (9): Bad file descriptor 00:06:59.648 [2024-11-26 20:27:59.993358] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62452 00:07:01.019 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62452) - No such process 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:01.019 { 00:07:01.019 "params": { 00:07:01.019 "name": "Nvme$subsystem", 00:07:01.019 "trtype": "$TEST_TRANSPORT", 00:07:01.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:01.019 "adrfam": "ipv4", 00:07:01.019 "trsvcid": "$NVMF_PORT", 00:07:01.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:01.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:01.019 "hdgst": ${hdgst:-false}, 00:07:01.019 "ddgst": ${ddgst:-false} 00:07:01.019 }, 00:07:01.019 "method": "bdev_nvme_attach_controller" 00:07:01.019 } 00:07:01.019 EOF 00:07:01.019 )") 00:07:01.019 20:28:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:01.019 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:01.019 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:01.019 20:28:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:01.019 "params": { 00:07:01.019 "name": "Nvme0", 00:07:01.019 "trtype": "tcp", 00:07:01.019 "traddr": "10.0.0.3", 00:07:01.019 "adrfam": "ipv4", 00:07:01.019 "trsvcid": "4420", 00:07:01.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:01.019 "hdgst": false, 00:07:01.019 "ddgst": false 00:07:01.019 }, 00:07:01.019 "method": "bdev_nvme_attach_controller" 00:07:01.019 }' 00:07:01.019 [2024-11-26 20:28:01.041916] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:07:01.020 [2024-11-26 20:28:01.042001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62492 ] 00:07:01.020 [2024-11-26 20:28:01.185881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.020 [2024-11-26 20:28:01.242508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.020 [2024-11-26 20:28:01.305722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.283 Running I/O for 1 seconds... 00:07:02.223 1472.00 IOPS, 92.00 MiB/s 00:07:02.223 Latency(us) 00:07:02.223 [2024-11-26T20:28:02.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.223 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:02.223 Verification LBA range: start 0x0 length 0x400 00:07:02.223 Nvme0n1 : 1.01 1523.30 95.21 0.00 0.00 41191.81 4230.05 39083.29 00:07:02.223 [2024-11-26T20:28:02.578Z] =================================================================================================================== 00:07:02.223 [2024-11-26T20:28:02.578Z] Total : 1523.30 95.21 0.00 0.00 41191.81 4230.05 39083.29 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.480 rmmod nvme_tcp 00:07:02.480 rmmod nvme_fabrics 00:07:02.480 rmmod nvme_keyring 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62398 ']' 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62398 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62398 ']' 00:07:02.480 20:28:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62398 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62398 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:02.480 killing process with pid 62398 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62398' 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62398 00:07:02.480 20:28:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62398 00:07:02.738 [2024-11-26 20:28:03.088107] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:02.996 20:28:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.996 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:03.254 00:07:03.254 real 0m6.313s 00:07:03.254 user 0m22.762s 00:07:03.254 sys 0m1.609s 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.254 ************************************ 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 END TEST nvmf_host_management 00:07:03.254 ************************************ 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 ************************************ 00:07:03.254 START TEST nvmf_lvol 00:07:03.254 ************************************ 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:03.254 * Looking for test storage... 
00:07:03.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:03.254 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.255 --rc genhtml_branch_coverage=1 00:07:03.255 --rc genhtml_function_coverage=1 00:07:03.255 --rc genhtml_legend=1 00:07:03.255 --rc geninfo_all_blocks=1 00:07:03.255 --rc geninfo_unexecuted_blocks=1 00:07:03.255 00:07:03.255 ' 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.255 --rc genhtml_branch_coverage=1 00:07:03.255 --rc genhtml_function_coverage=1 00:07:03.255 --rc genhtml_legend=1 00:07:03.255 --rc geninfo_all_blocks=1 00:07:03.255 --rc geninfo_unexecuted_blocks=1 00:07:03.255 00:07:03.255 ' 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.255 --rc genhtml_branch_coverage=1 00:07:03.255 --rc genhtml_function_coverage=1 00:07:03.255 --rc genhtml_legend=1 00:07:03.255 --rc geninfo_all_blocks=1 00:07:03.255 --rc geninfo_unexecuted_blocks=1 00:07:03.255 00:07:03.255 ' 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.255 --rc genhtml_branch_coverage=1 00:07:03.255 --rc genhtml_function_coverage=1 00:07:03.255 --rc genhtml_legend=1 00:07:03.255 --rc geninfo_all_blocks=1 00:07:03.255 --rc geninfo_unexecuted_blocks=1 00:07:03.255 00:07:03.255 ' 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.255 20:28:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.255 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.514 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:03.514 
20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:03.514 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:03.515 Cannot find device "nvmf_init_br" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:03.515 Cannot find device "nvmf_init_br2" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:03.515 Cannot find device "nvmf_tgt_br" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:03.515 Cannot find device "nvmf_tgt_br2" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:03.515 Cannot find device "nvmf_init_br" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:03.515 Cannot find device "nvmf_init_br2" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:03.515 Cannot find device "nvmf_tgt_br" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:03.515 Cannot find device "nvmf_tgt_br2" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:03.515 Cannot find device "nvmf_br" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:03.515 Cannot find device "nvmf_init_if" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:03.515 Cannot find device "nvmf_init_if2" 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:03.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:03.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:03.515 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:03.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:03.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:07:03.774 00:07:03.774 --- 10.0.0.3 ping statistics --- 00:07:03.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.774 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:03.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:03.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:07:03.774 00:07:03.774 --- 10.0.0.4 ping statistics --- 00:07:03.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.774 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:03.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:03.774 00:07:03.774 --- 10.0.0.1 ping statistics --- 00:07:03.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.774 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:03.774 20:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:03.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:03.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:07:03.774 00:07:03.774 --- 10.0.0.2 ping statistics --- 00:07:03.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.774 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62763 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62763 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62763 ']' 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.774 20:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.774 [2024-11-26 20:28:04.088641] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:07:03.774 [2024-11-26 20:28:04.088728] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.032 [2024-11-26 20:28:04.235186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.032 [2024-11-26 20:28:04.293308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.032 [2024-11-26 20:28:04.293599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.032 [2024-11-26 20:28:04.293846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.032 [2024-11-26 20:28:04.294001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.032 [2024-11-26 20:28:04.294198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.032 [2024-11-26 20:28:04.295575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.032 [2024-11-26 20:28:04.295622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.032 [2024-11-26 20:28:04.295627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.032 [2024-11-26 20:28:04.366351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.967 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:05.229 [2024-11-26 20:28:05.413016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.229 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:05.489 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:05.489 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:05.748 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:05.748 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:06.007 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:06.403 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eb980189-be47-4859-82fe-1cb14849eb5c 00:07:06.404 20:28:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u eb980189-be47-4859-82fe-1cb14849eb5c lvol 20 00:07:06.662 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ea8eba25-8d78-413b-b16f-f5f95aa32cd8 00:07:06.662 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:06.920 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea8eba25-8d78-413b-b16f-f5f95aa32cd8 00:07:07.239 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:07.501 [2024-11-26 20:28:07.765956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:07.501 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:07.760 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62844 00:07:07.760 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:07.760 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:08.697 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ea8eba25-8d78-413b-b16f-f5f95aa32cd8 MY_SNAPSHOT 00:07:09.265 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=90d5046a-159d-4b56-acd1-44b02cea4a48 00:07:09.265 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ea8eba25-8d78-413b-b16f-f5f95aa32cd8 30 00:07:09.524 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 90d5046a-159d-4b56-acd1-44b02cea4a48 MY_CLONE 00:07:09.783 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b8890b80-2019-4432-bd29-8b7686a34a2f 00:07:09.783 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b8890b80-2019-4432-bd29-8b7686a34a2f 00:07:10.361 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62844 00:07:18.628 Initializing NVMe Controllers 00:07:18.628 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:18.628 Controller IO queue size 128, less than required. 00:07:18.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.628 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:18.628 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:18.628 Initialization complete. Launching workers. 
00:07:18.628 ======================================================== 00:07:18.628 Latency(us) 00:07:18.628 Device Information : IOPS MiB/s Average min max 00:07:18.628 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10166.50 39.71 12590.59 2624.30 65232.60 00:07:18.628 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10120.10 39.53 12653.13 3326.29 62520.90 00:07:18.628 ======================================================== 00:07:18.628 Total : 20286.59 79.24 12621.79 2624.30 65232.60 00:07:18.628 00:07:18.628 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:18.628 20:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ea8eba25-8d78-413b-b16f-f5f95aa32cd8 00:07:18.915 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb980189-be47-4859-82fe-1cb14849eb5c 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.174 rmmod nvme_tcp 00:07:19.174 rmmod nvme_fabrics 00:07:19.174 rmmod nvme_keyring 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62763 ']' 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62763 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62763 ']' 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62763 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62763 00:07:19.174 killing process with pid 62763 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62763' 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62763 00:07:19.174 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62763 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:19.432 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.690 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:19.690 00:07:19.690 real 0m16.546s 00:07:19.690 user 1m7.540s 00:07:19.690 sys 0m4.267s 00:07:19.691 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:19.691 20:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.691 ************************************ 00:07:19.691 END TEST nvmf_lvol 00:07:19.691 ************************************ 00:07:19.691 20:28:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:19.691 20:28:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.691 20:28:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.691 20:28:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.691 ************************************ 00:07:19.691 START TEST nvmf_lvs_grow 00:07:19.691 ************************************ 00:07:19.691 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:19.950 * Looking for test storage... 00:07:19.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.950 --rc genhtml_branch_coverage=1 00:07:19.950 --rc genhtml_function_coverage=1 00:07:19.950 --rc genhtml_legend=1 00:07:19.950 --rc geninfo_all_blocks=1 00:07:19.950 --rc geninfo_unexecuted_blocks=1 00:07:19.950 00:07:19.950 ' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.950 --rc genhtml_branch_coverage=1 00:07:19.950 --rc genhtml_function_coverage=1 00:07:19.950 --rc genhtml_legend=1 00:07:19.950 --rc geninfo_all_blocks=1 00:07:19.950 --rc geninfo_unexecuted_blocks=1 00:07:19.950 00:07:19.950 ' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.950 --rc genhtml_branch_coverage=1 00:07:19.950 --rc genhtml_function_coverage=1 00:07:19.950 --rc genhtml_legend=1 00:07:19.950 --rc geninfo_all_blocks=1 00:07:19.950 --rc geninfo_unexecuted_blocks=1 00:07:19.950 00:07:19.950 ' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.950 --rc genhtml_branch_coverage=1 00:07:19.950 --rc genhtml_function_coverage=1 00:07:19.950 --rc genhtml_legend=1 00:07:19.950 --rc geninfo_all_blocks=1 00:07:19.950 --rc geninfo_unexecuted_blocks=1 00:07:19.950 00:07:19.950 ' 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:19.950 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:19.951 20:28:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
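[annotation] The two assignments just above are the only pieces of state nvmf_lvs_grow.sh sets up before calling nvmftestinit: ordinary target-side RPCs go through scripts/rpc.py against the nvmf_tgt's default /var/tmp/spdk.sock, while the bdevperf initiator started later in this test runs its own RPC server on /var/tmp/bdevperf.sock and is addressed with -s. A minimal sketch of that split, assembled only from commands that appear later in this trace (option values are copied from the trace, not independently verified):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Target-side RPC: goes to nvmf_tgt's default socket (/var/tmp/spdk.sock)
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192

    # Initiator-side RPC: bdevperf's own server, selected with -s
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0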
00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:19.951 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:19.952 Cannot find device "nvmf_init_br" 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:19.952 Cannot find device "nvmf_init_br2" 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:19.952 Cannot find device "nvmf_tgt_br" 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.952 Cannot find device "nvmf_tgt_br2" 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:19.952 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:20.210 Cannot find device "nvmf_init_br" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:20.210 Cannot find device "nvmf_init_br2" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:20.210 Cannot find device "nvmf_tgt_br" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:20.210 Cannot find device "nvmf_tgt_br2" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:20.210 Cannot find device "nvmf_br" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:20.210 Cannot find device "nvmf_init_if" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:20.210 Cannot find device "nvmf_init_if2" 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:20.210 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.211 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
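[annotation] At this point nvmf_veth_init has rebuilt the same topology the earlier nvmf_lvol run used: two initiator-side veth pairs in the root namespace (10.0.0.1/10.0.0.2), two target-side pairs moved into nvmf_tgt_ns_spdk (10.0.0.3/10.0.0.4), and all four bridge-side peers enslaved to nvmf_br. The iptables rules and pings that follow only open port 4420 and verify reachability. Condensed from the traced commands into a sketch (the `ip link set ... up` steps are omitted for brevity; the loop is an editorial shorthand, not part of the script):

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair per interface; the *_br ends will join the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target ends live in the namespace, initiator ends stay in the root namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # a single Linux bridge ties the four *_br peers together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done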
00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:20.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:20.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:07:20.470 00:07:20.470 --- 10.0.0.3 ping statistics --- 00:07:20.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.470 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:20.470 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:20.470 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:07:20.470 00:07:20.470 --- 10.0.0.4 ping statistics --- 00:07:20.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.470 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:20.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:20.470 00:07:20.470 --- 10.0.0.1 ping statistics --- 00:07:20.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.470 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:20.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:07:20.470 00:07:20.470 --- 10.0.0.2 ping statistics --- 00:07:20.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.470 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63235 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63235 00:07:20.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63235 ']' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.470 20:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.470 [2024-11-26 20:28:20.731869] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:07:20.470 [2024-11-26 20:28:20.732267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.729 [2024-11-26 20:28:20.887543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.729 [2024-11-26 20:28:20.954364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.729 [2024-11-26 20:28:20.954433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.729 [2024-11-26 20:28:20.954449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.729 [2024-11-26 20:28:20.954459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.729 [2024-11-26 20:28:20.954468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.729 [2024-11-26 20:28:20.954955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.729 [2024-11-26 20:28:21.013696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.988 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:21.246 [2024-11-26 20:28:21.414141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 ************************************ 00:07:21.246 START TEST lvs_grow_clean 00:07:21.246 ************************************ 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:21.246 20:28:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.246 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.503 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.503 20:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.069 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:22.069 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:22.069 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.327 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.327 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.327 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 lvol 150 00:07:22.591 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3520de07-f757-4b5b-9dc0-d843817ae3ea 00:07:22.591 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:22.591 20:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.850 [2024-11-26 20:28:23.028336] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:22.850 [2024-11-26 20:28:23.028548] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.850 true 00:07:22.850 20:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.850 20:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:23.108 20:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.108 20:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.367 20:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3520de07-f757-4b5b-9dc0-d843817ae3ea 00:07:23.627 20:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:23.887 [2024-11-26 20:28:24.129148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:23.887 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:24.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63310 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63310 /var/tmp/bdevperf.sock 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63310 ']' 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.145 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:24.145 [2024-11-26 20:28:24.489593] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:07:24.145 [2024-11-26 20:28:24.489683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63310 ] 00:07:24.404 [2024-11-26 20:28:24.643171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.404 [2024-11-26 20:28:24.717941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.663 [2024-11-26 20:28:24.777194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.232 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.232 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:25.232 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.491 Nvme0n1 00:07:25.491 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:26.060 [ 00:07:26.060 { 00:07:26.060 "name": "Nvme0n1", 00:07:26.060 "aliases": [ 00:07:26.060 "3520de07-f757-4b5b-9dc0-d843817ae3ea" 00:07:26.060 ], 00:07:26.060 "product_name": "NVMe disk", 00:07:26.060 "block_size": 4096, 00:07:26.060 "num_blocks": 38912, 00:07:26.060 "uuid": "3520de07-f757-4b5b-9dc0-d843817ae3ea", 00:07:26.060 "numa_id": -1, 00:07:26.060 "assigned_rate_limits": { 00:07:26.060 "rw_ios_per_sec": 0, 00:07:26.060 "rw_mbytes_per_sec": 0, 00:07:26.060 "r_mbytes_per_sec": 0, 00:07:26.060 "w_mbytes_per_sec": 0 00:07:26.060 }, 00:07:26.060 "claimed": false, 00:07:26.060 "zoned": false, 00:07:26.060 "supported_io_types": { 00:07:26.060 "read": true, 00:07:26.060 "write": true, 00:07:26.060 "unmap": true, 00:07:26.060 "flush": true, 00:07:26.060 "reset": true, 00:07:26.060 "nvme_admin": true, 00:07:26.060 "nvme_io": true, 00:07:26.060 "nvme_io_md": false, 00:07:26.060 "write_zeroes": true, 00:07:26.060 "zcopy": false, 00:07:26.060 "get_zone_info": false, 00:07:26.060 "zone_management": false, 00:07:26.060 "zone_append": false, 00:07:26.060 "compare": true, 00:07:26.060 "compare_and_write": true, 00:07:26.060 "abort": true, 00:07:26.060 "seek_hole": false, 00:07:26.060 "seek_data": false, 00:07:26.060 "copy": true, 00:07:26.060 "nvme_iov_md": false 00:07:26.060 }, 00:07:26.060 "memory_domains": [ 00:07:26.060 { 00:07:26.060 "dma_device_id": "system", 00:07:26.060 "dma_device_type": 1 00:07:26.060 } 00:07:26.060 ], 00:07:26.060 "driver_specific": { 00:07:26.060 "nvme": [ 00:07:26.060 { 00:07:26.060 "trid": { 00:07:26.060 "trtype": "TCP", 00:07:26.060 "adrfam": "IPv4", 00:07:26.060 "traddr": "10.0.0.3", 00:07:26.060 "trsvcid": "4420", 00:07:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:26.060 }, 00:07:26.060 "ctrlr_data": { 00:07:26.060 "cntlid": 1, 00:07:26.060 "vendor_id": "0x8086", 00:07:26.060 "model_number": "SPDK bdev Controller", 00:07:26.060 "serial_number": "SPDK0", 00:07:26.060 "firmware_revision": "25.01", 00:07:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.060 "oacs": { 00:07:26.060 "security": 0, 00:07:26.060 "format": 0, 00:07:26.060 "firmware": 0, 
00:07:26.060 "ns_manage": 0 00:07:26.060 }, 00:07:26.060 "multi_ctrlr": true, 00:07:26.060 "ana_reporting": false 00:07:26.060 }, 00:07:26.060 "vs": { 00:07:26.060 "nvme_version": "1.3" 00:07:26.060 }, 00:07:26.060 "ns_data": { 00:07:26.060 "id": 1, 00:07:26.060 "can_share": true 00:07:26.060 } 00:07:26.060 } 00:07:26.060 ], 00:07:26.060 "mp_policy": "active_passive" 00:07:26.060 } 00:07:26.060 } 00:07:26.060 ] 00:07:26.060 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.060 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63339 00:07:26.060 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.060 Running I/O for 10 seconds... 00:07:27.022 Latency(us) 00:07:27.022 [2024-11-26T20:28:27.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.022 Nvme0n1 : 1.00 6306.00 24.63 0.00 0.00 0.00 0.00 0.00 00:07:27.022 [2024-11-26T20:28:27.377Z] =================================================================================================================== 00:07:27.022 [2024-11-26T20:28:27.377Z] Total : 6306.00 24.63 0.00 0.00 0.00 0.00 0.00 00:07:27.022 00:07:27.971 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:27.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.971 Nvme0n1 : 2.00 6264.50 24.47 0.00 0.00 0.00 0.00 0.00 00:07:27.971 [2024-11-26T20:28:28.326Z] =================================================================================================================== 00:07:27.971 [2024-11-26T20:28:28.326Z] Total : 6264.50 24.47 0.00 0.00 0.00 0.00 0.00 00:07:27.971 00:07:28.230 true 00:07:28.230 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:28.230 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:28.488 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:28.488 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:28.488 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63339 00:07:29.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.054 Nvme0n1 : 3.00 6377.67 24.91 0.00 0.00 0.00 0.00 0.00 00:07:29.054 [2024-11-26T20:28:29.409Z] =================================================================================================================== 00:07:29.054 [2024-11-26T20:28:29.409Z] Total : 6377.67 24.91 0.00 0.00 0.00 0.00 0.00 00:07:29.054 00:07:29.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.989 Nvme0n1 : 4.00 6358.25 24.84 0.00 0.00 0.00 0.00 0.00 00:07:29.989 [2024-11-26T20:28:30.344Z] 
=================================================================================================================== 00:07:29.989 [2024-11-26T20:28:30.344Z] Total : 6358.25 24.84 0.00 0.00 0.00 0.00 0.00 00:07:29.989 00:07:30.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.924 Nvme0n1 : 5.00 6274.60 24.51 0.00 0.00 0.00 0.00 0.00 00:07:30.924 [2024-11-26T20:28:31.279Z] =================================================================================================================== 00:07:30.924 [2024-11-26T20:28:31.279Z] Total : 6274.60 24.51 0.00 0.00 0.00 0.00 0.00 00:07:30.924 00:07:32.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.307 Nvme0n1 : 6.00 6244.83 24.39 0.00 0.00 0.00 0.00 0.00 00:07:32.307 [2024-11-26T20:28:32.662Z] =================================================================================================================== 00:07:32.307 [2024-11-26T20:28:32.662Z] Total : 6244.83 24.39 0.00 0.00 0.00 0.00 0.00 00:07:32.307 00:07:33.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.243 Nvme0n1 : 7.00 6223.57 24.31 0.00 0.00 0.00 0.00 0.00 00:07:33.243 [2024-11-26T20:28:33.598Z] =================================================================================================================== 00:07:33.243 [2024-11-26T20:28:33.598Z] Total : 6223.57 24.31 0.00 0.00 0.00 0.00 0.00 00:07:33.243 00:07:34.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.178 Nvme0n1 : 8.00 6191.75 24.19 0.00 0.00 0.00 0.00 0.00 00:07:34.178 [2024-11-26T20:28:34.533Z] =================================================================================================================== 00:07:34.178 [2024-11-26T20:28:34.533Z] Total : 6191.75 24.19 0.00 0.00 0.00 0.00 0.00 00:07:34.178 00:07:35.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.122 Nvme0n1 : 9.00 6152.89 24.03 0.00 0.00 0.00 0.00 0.00 00:07:35.122 [2024-11-26T20:28:35.477Z] =================================================================================================================== 00:07:35.122 [2024-11-26T20:28:35.477Z] Total : 6152.89 24.03 0.00 0.00 0.00 0.00 0.00 00:07:35.122 00:07:36.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.119 Nvme0n1 : 10.00 6223.40 24.31 0.00 0.00 0.00 0.00 0.00 00:07:36.119 [2024-11-26T20:28:36.474Z] =================================================================================================================== 00:07:36.119 [2024-11-26T20:28:36.474Z] Total : 6223.40 24.31 0.00 0.00 0.00 0.00 0.00 00:07:36.119 00:07:36.119 00:07:36.119 Latency(us) 00:07:36.119 [2024-11-26T20:28:36.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.119 Nvme0n1 : 10.01 6229.45 24.33 0.00 0.00 20540.37 14775.39 93895.21 00:07:36.119 [2024-11-26T20:28:36.474Z] =================================================================================================================== 00:07:36.119 [2024-11-26T20:28:36.474Z] Total : 6229.45 24.33 0.00 0.00 20540.37 14775.39 93895.21 00:07:36.119 { 00:07:36.119 "results": [ 00:07:36.119 { 00:07:36.119 "job": "Nvme0n1", 00:07:36.119 "core_mask": "0x2", 00:07:36.119 "workload": "randwrite", 00:07:36.119 "status": "finished", 00:07:36.119 "queue_depth": 128, 00:07:36.119 "io_size": 4096, 00:07:36.119 "runtime": 
10.010834, 00:07:36.119 "iops": 6229.451012772762, 00:07:36.119 "mibps": 24.3337930186436, 00:07:36.119 "io_failed": 0, 00:07:36.119 "io_timeout": 0, 00:07:36.119 "avg_latency_us": 20540.370653224138, 00:07:36.119 "min_latency_us": 14775.389090909091, 00:07:36.119 "max_latency_us": 93895.21454545454 00:07:36.119 } 00:07:36.119 ], 00:07:36.119 "core_count": 1 00:07:36.119 } 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63310 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63310 ']' 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63310 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63310 00:07:36.119 killing process with pid 63310 00:07:36.119 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.119 00:07:36.119 Latency(us) 00:07:36.119 [2024-11-26T20:28:36.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.119 [2024-11-26T20:28:36.474Z] =================================================================================================================== 00:07:36.119 [2024-11-26T20:28:36.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63310' 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63310 00:07:36.119 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63310 00:07:36.378 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:36.635 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.894 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.894 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:37.151 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:37.151 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:37.151 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:37.408 [2024-11-26 20:28:37.739896] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:37.666 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:37.925 request: 00:07:37.925 { 00:07:37.925 "uuid": "129e54fa-a89b-4e1a-ac72-466e8e612cf1", 00:07:37.925 "method": "bdev_lvol_get_lvstores", 00:07:37.925 "req_id": 1 00:07:37.925 } 00:07:37.925 Got JSON-RPC error response 00:07:37.925 response: 00:07:37.925 { 00:07:37.925 "code": -19, 00:07:37.925 "message": "No such device" 00:07:37.925 } 00:07:37.925 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:37.925 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.925 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.925 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.925 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.184 aio_bdev 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3520de07-f757-4b5b-9dc0-d843817ae3ea 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3520de07-f757-4b5b-9dc0-d843817ae3ea 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.184 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.442 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3520de07-f757-4b5b-9dc0-d843817ae3ea -t 2000 00:07:38.702 [ 00:07:38.702 { 00:07:38.702 "name": "3520de07-f757-4b5b-9dc0-d843817ae3ea", 00:07:38.702 "aliases": [ 00:07:38.702 "lvs/lvol" 00:07:38.702 ], 00:07:38.702 "product_name": "Logical Volume", 00:07:38.702 "block_size": 4096, 00:07:38.702 "num_blocks": 38912, 00:07:38.702 "uuid": "3520de07-f757-4b5b-9dc0-d843817ae3ea", 00:07:38.702 "assigned_rate_limits": { 00:07:38.702 "rw_ios_per_sec": 0, 00:07:38.702 "rw_mbytes_per_sec": 0, 00:07:38.702 "r_mbytes_per_sec": 0, 00:07:38.702 "w_mbytes_per_sec": 0 00:07:38.702 }, 00:07:38.702 "claimed": false, 00:07:38.702 "zoned": false, 00:07:38.702 "supported_io_types": { 00:07:38.702 "read": true, 00:07:38.702 "write": true, 00:07:38.702 "unmap": true, 00:07:38.702 "flush": false, 00:07:38.702 "reset": true, 00:07:38.702 "nvme_admin": false, 00:07:38.702 "nvme_io": false, 00:07:38.702 "nvme_io_md": false, 00:07:38.702 "write_zeroes": true, 00:07:38.702 "zcopy": false, 00:07:38.702 "get_zone_info": false, 00:07:38.702 "zone_management": false, 00:07:38.702 "zone_append": false, 00:07:38.702 "compare": false, 00:07:38.702 "compare_and_write": false, 00:07:38.702 "abort": false, 00:07:38.702 "seek_hole": true, 00:07:38.702 "seek_data": true, 00:07:38.702 "copy": false, 00:07:38.702 "nvme_iov_md": false 00:07:38.702 }, 00:07:38.702 "driver_specific": { 00:07:38.702 "lvol": { 00:07:38.702 "lvol_store_uuid": "129e54fa-a89b-4e1a-ac72-466e8e612cf1", 00:07:38.702 "base_bdev": "aio_bdev", 00:07:38.702 "thin_provision": false, 00:07:38.702 "num_allocated_clusters": 38, 00:07:38.702 "snapshot": false, 00:07:38.702 "clone": false, 00:07:38.702 "esnap_clone": false 00:07:38.702 } 00:07:38.702 } 00:07:38.702 } 00:07:38.702 ] 00:07:38.702 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:38.702 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.702 20:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:38.961 20:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.961 20:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:38.961 20:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:39.219 20:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:39.219 20:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3520de07-f757-4b5b-9dc0-d843817ae3ea 00:07:39.785 20:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 129e54fa-a89b-4e1a-ac72-466e8e612cf1 00:07:40.043 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.303 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:40.562 ************************************ 00:07:40.562 END TEST lvs_grow_clean 00:07:40.562 ************************************ 00:07:40.562 00:07:40.562 real 0m19.408s 00:07:40.562 user 0m18.356s 00:07:40.562 sys 0m2.623s 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:40.562 ************************************ 00:07:40.562 START TEST lvs_grow_dirty 00:07:40.562 ************************************ 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:40.562 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:40.821 20:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.080 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:41.080 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:41.339 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:41.339 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:41.339 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:41.598 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:41.598 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:41.598 20:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cd1013b8-a054-4ceb-b304-17fa62e41f7d lvol 150 00:07:41.857 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:07:41.857 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:41.857 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:42.116 [2024-11-26 20:28:42.372219] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:42.116 [2024-11-26 20:28:42.372548] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:42.116 true 00:07:42.116 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:42.116 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:42.375 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:42.375 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.634 20:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:07:42.892 20:28:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:43.150 [2024-11-26 20:28:43.484839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:43.407 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:43.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63593 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63593 /var/tmp/bdevperf.sock 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63593 ']' 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.666 20:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.666 [2024-11-26 20:28:43.812614] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
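As in the clean variant, the step under test happens while bdevperf is running: the lvstore is grown to cover the enlarged backing file and the cluster count is re-read. Roughly (a sketch; $lvs stands for the lvstore UUID created above, cd1013b8-a054-4ceb-b304-17fa62e41f7d in this run):

  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( data_clusters == 99 ))   # 400 MiB backing file with 4 MiB clusters, less metadata

What makes this the dirty variant comes afterwards: instead of tearing the target down cleanly, the test SIGKILLs it so the lvstore never gets a clean unload.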
00:07:43.667 [2024-11-26 20:28:43.812698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63593 ] 00:07:43.667 [2024-11-26 20:28:43.967814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.925 [2024-11-26 20:28:44.034336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.925 [2024-11-26 20:28:44.092051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.493 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.493 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:44.493 20:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.752 Nvme0n1 00:07:45.010 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:45.268 [ 00:07:45.268 { 00:07:45.268 "name": "Nvme0n1", 00:07:45.268 "aliases": [ 00:07:45.268 "bf33ee48-55b8-41b6-ac50-3b5dd14d87fa" 00:07:45.268 ], 00:07:45.268 "product_name": "NVMe disk", 00:07:45.268 "block_size": 4096, 00:07:45.268 "num_blocks": 38912, 00:07:45.268 "uuid": "bf33ee48-55b8-41b6-ac50-3b5dd14d87fa", 00:07:45.268 "numa_id": -1, 00:07:45.268 "assigned_rate_limits": { 00:07:45.268 "rw_ios_per_sec": 0, 00:07:45.268 "rw_mbytes_per_sec": 0, 00:07:45.268 "r_mbytes_per_sec": 0, 00:07:45.268 "w_mbytes_per_sec": 0 00:07:45.268 }, 00:07:45.268 "claimed": false, 00:07:45.268 "zoned": false, 00:07:45.268 "supported_io_types": { 00:07:45.268 "read": true, 00:07:45.268 "write": true, 00:07:45.268 "unmap": true, 00:07:45.268 "flush": true, 00:07:45.268 "reset": true, 00:07:45.268 "nvme_admin": true, 00:07:45.268 "nvme_io": true, 00:07:45.269 "nvme_io_md": false, 00:07:45.269 "write_zeroes": true, 00:07:45.269 "zcopy": false, 00:07:45.269 "get_zone_info": false, 00:07:45.269 "zone_management": false, 00:07:45.269 "zone_append": false, 00:07:45.269 "compare": true, 00:07:45.269 "compare_and_write": true, 00:07:45.269 "abort": true, 00:07:45.269 "seek_hole": false, 00:07:45.269 "seek_data": false, 00:07:45.269 "copy": true, 00:07:45.269 "nvme_iov_md": false 00:07:45.269 }, 00:07:45.269 "memory_domains": [ 00:07:45.269 { 00:07:45.269 "dma_device_id": "system", 00:07:45.269 "dma_device_type": 1 00:07:45.269 } 00:07:45.269 ], 00:07:45.269 "driver_specific": { 00:07:45.269 "nvme": [ 00:07:45.269 { 00:07:45.269 "trid": { 00:07:45.269 "trtype": "TCP", 00:07:45.269 "adrfam": "IPv4", 00:07:45.269 "traddr": "10.0.0.3", 00:07:45.269 "trsvcid": "4420", 00:07:45.269 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:45.269 }, 00:07:45.269 "ctrlr_data": { 00:07:45.269 "cntlid": 1, 00:07:45.269 "vendor_id": "0x8086", 00:07:45.269 "model_number": "SPDK bdev Controller", 00:07:45.269 "serial_number": "SPDK0", 00:07:45.269 "firmware_revision": "25.01", 00:07:45.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:45.269 "oacs": { 00:07:45.269 "security": 0, 00:07:45.269 "format": 0, 00:07:45.269 "firmware": 0, 
00:07:45.269 "ns_manage": 0 00:07:45.269 }, 00:07:45.269 "multi_ctrlr": true, 00:07:45.269 "ana_reporting": false 00:07:45.269 }, 00:07:45.269 "vs": { 00:07:45.269 "nvme_version": "1.3" 00:07:45.269 }, 00:07:45.269 "ns_data": { 00:07:45.269 "id": 1, 00:07:45.269 "can_share": true 00:07:45.269 } 00:07:45.269 } 00:07:45.269 ], 00:07:45.269 "mp_policy": "active_passive" 00:07:45.269 } 00:07:45.269 } 00:07:45.269 ] 00:07:45.269 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63621 00:07:45.269 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:45.269 20:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:45.269 Running I/O for 10 seconds... 00:07:46.203 Latency(us) 00:07:46.203 [2024-11-26T20:28:46.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.204 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:46.204 [2024-11-26T20:28:46.559Z] =================================================================================================================== 00:07:46.204 [2024-11-26T20:28:46.559Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:46.204 00:07:47.139 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:47.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.397 Nvme0n1 : 2.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:07:47.397 [2024-11-26T20:28:47.752Z] =================================================================================================================== 00:07:47.397 [2024-11-26T20:28:47.752Z] Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:07:47.397 00:07:47.397 true 00:07:47.397 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:47.397 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:47.968 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:47.968 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:47.968 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63621 00:07:48.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.226 Nvme0n1 : 3.00 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:48.226 [2024-11-26T20:28:48.581Z] =================================================================================================================== 00:07:48.226 [2024-11-26T20:28:48.581Z] Total : 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:48.226 00:07:49.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.602 Nvme0n1 : 4.00 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:07:49.602 [2024-11-26T20:28:49.957Z] 
=================================================================================================================== 00:07:49.602 [2024-11-26T20:28:49.957Z] Total : 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:07:49.602 00:07:50.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.169 Nvme0n1 : 5.00 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:07:50.169 [2024-11-26T20:28:50.524Z] =================================================================================================================== 00:07:50.169 [2024-11-26T20:28:50.524Z] Total : 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:07:50.169 00:07:51.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.545 Nvme0n1 : 6.00 7090.83 27.70 0.00 0.00 0.00 0.00 0.00 00:07:51.545 [2024-11-26T20:28:51.900Z] =================================================================================================================== 00:07:51.545 [2024-11-26T20:28:51.900Z] Total : 7090.83 27.70 0.00 0.00 0.00 0.00 0.00 00:07:51.545 00:07:52.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.481 Nvme0n1 : 7.00 7039.43 27.50 0.00 0.00 0.00 0.00 0.00 00:07:52.481 [2024-11-26T20:28:52.836Z] =================================================================================================================== 00:07:52.481 [2024-11-26T20:28:52.836Z] Total : 7039.43 27.50 0.00 0.00 0.00 0.00 0.00 00:07:52.481 00:07:53.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.417 Nvme0n1 : 8.00 7016.75 27.41 0.00 0.00 0.00 0.00 0.00 00:07:53.417 [2024-11-26T20:28:53.772Z] =================================================================================================================== 00:07:53.417 [2024-11-26T20:28:53.772Z] Total : 7016.75 27.41 0.00 0.00 0.00 0.00 0.00 00:07:53.417 00:07:54.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.357 Nvme0n1 : 9.00 6999.11 27.34 0.00 0.00 0.00 0.00 0.00 00:07:54.357 [2024-11-26T20:28:54.712Z] =================================================================================================================== 00:07:54.357 [2024-11-26T20:28:54.712Z] Total : 6999.11 27.34 0.00 0.00 0.00 0.00 0.00 00:07:54.357 00:07:55.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.299 Nvme0n1 : 10.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:55.299 [2024-11-26T20:28:55.654Z] =================================================================================================================== 00:07:55.299 [2024-11-26T20:28:55.654Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:55.299 00:07:55.299 00:07:55.299 Latency(us) 00:07:55.299 [2024-11-26T20:28:55.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.299 Nvme0n1 : 10.01 6987.67 27.30 0.00 0.00 18311.57 13405.09 72447.07 00:07:55.299 [2024-11-26T20:28:55.654Z] =================================================================================================================== 00:07:55.299 [2024-11-26T20:28:55.654Z] Total : 6987.67 27.30 0.00 0.00 18311.57 13405.09 72447.07 00:07:55.299 { 00:07:55.299 "results": [ 00:07:55.299 { 00:07:55.299 "job": "Nvme0n1", 00:07:55.299 "core_mask": "0x2", 00:07:55.299 "workload": "randwrite", 00:07:55.299 "status": "finished", 00:07:55.299 "queue_depth": 128, 00:07:55.299 "io_size": 4096, 00:07:55.299 "runtime": 
10.014502, 00:07:55.299 "iops": 6987.666486061913, 00:07:55.299 "mibps": 27.295572211179348, 00:07:55.299 "io_failed": 0, 00:07:55.299 "io_timeout": 0, 00:07:55.299 "avg_latency_us": 18311.57443497826, 00:07:55.299 "min_latency_us": 13405.09090909091, 00:07:55.299 "max_latency_us": 72447.06909090909 00:07:55.299 } 00:07:55.299 ], 00:07:55.299 "core_count": 1 00:07:55.299 } 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63593 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63593 ']' 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63593 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63593 00:07:55.299 killing process with pid 63593 00:07:55.299 Received shutdown signal, test time was about 10.000000 seconds 00:07:55.299 00:07:55.299 Latency(us) 00:07:55.299 [2024-11-26T20:28:55.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.299 [2024-11-26T20:28:55.654Z] =================================================================================================================== 00:07:55.299 [2024-11-26T20:28:55.654Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63593' 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63593 00:07:55.299 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63593 00:07:55.559 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:55.818 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.078 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:56.078 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:56.336 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:56.336 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:56.336 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63235 
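Killing the target with SIGKILL is the whole point of the dirty variant: the lvstore never gets a clean unload, so the next load of the blobstore has to run recovery. The shell side is just the usual kill/wait idiom (a sketch; $nvmfpid is the nvmf_tgt pid, 63235 in this run):

  kill -9 "$nvmfpid"        # no clean shutdown, so the on-disk lvstore state is left dirty
  wait "$nvmfpid" || true   # reap the job; bash reports '... Killed ...' and wait returns 137 (128+SIGKILL)

which is why the 'line 75: 63235 Killed' message that follows is expected output, not a failure.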
00:07:56.336 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63235 00:07:56.595 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63235 Killed "${NVMF_APP[@]}" "$@" 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63759 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63759 00:07:56.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63759 ']' 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.595 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 [2024-11-26 20:28:56.790351] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:07:56.595 [2024-11-26 20:28:56.790677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.595 [2024-11-26 20:28:56.943195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.854 [2024-11-26 20:28:57.005023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.854 [2024-11-26 20:28:57.005290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.854 [2024-11-26 20:28:57.005441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.854 [2024-11-26 20:28:57.005577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.854 [2024-11-26 20:28:57.005593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
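Once the replacement target (pid 63759) is up, the test re-attaches the same backing file; because the lvstore was never cleanly unloaded, loading it triggers blobstore recovery, visible below as the 'Performing recovery on blobstore' and 'Recover: blob ...' notices. In RPC terms the re-attach and the post-recovery checks look roughly like this (a sketch using the UUIDs created earlier in this run; expected counts taken from the checks in this trace):

  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b bf33ee48-55b8-41b6-ac50-3b5dd14d87fa -t 2000                        # the lvol reappears
  scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d | jq -r '.[0].free_clusters'         # 61 (38 of 99 clusters allocated to the lvol)
  scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d | jq -r '.[0].total_data_clusters'   # still 99 after recovery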
00:07:56.854 [2024-11-26 20:28:57.006010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.854 [2024-11-26 20:28:57.062077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.422 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.422 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:57.422 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.422 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.422 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:57.682 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.682 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:57.941 [2024-11-26 20:28:58.067179] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:57.941 [2024-11-26 20:28:58.067517] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:57.941 [2024-11-26 20:28:58.067799] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.941 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:58.200 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf33ee48-55b8-41b6-ac50-3b5dd14d87fa -t 2000 00:07:58.459 [ 00:07:58.459 { 00:07:58.459 "name": "bf33ee48-55b8-41b6-ac50-3b5dd14d87fa", 00:07:58.459 "aliases": [ 00:07:58.459 "lvs/lvol" 00:07:58.459 ], 00:07:58.459 "product_name": "Logical Volume", 00:07:58.459 "block_size": 4096, 00:07:58.459 "num_blocks": 38912, 00:07:58.459 "uuid": "bf33ee48-55b8-41b6-ac50-3b5dd14d87fa", 00:07:58.459 "assigned_rate_limits": { 00:07:58.459 "rw_ios_per_sec": 0, 00:07:58.459 "rw_mbytes_per_sec": 0, 00:07:58.459 "r_mbytes_per_sec": 0, 00:07:58.459 "w_mbytes_per_sec": 0 00:07:58.459 }, 00:07:58.459 
"claimed": false, 00:07:58.459 "zoned": false, 00:07:58.459 "supported_io_types": { 00:07:58.459 "read": true, 00:07:58.459 "write": true, 00:07:58.459 "unmap": true, 00:07:58.459 "flush": false, 00:07:58.459 "reset": true, 00:07:58.459 "nvme_admin": false, 00:07:58.459 "nvme_io": false, 00:07:58.459 "nvme_io_md": false, 00:07:58.459 "write_zeroes": true, 00:07:58.459 "zcopy": false, 00:07:58.459 "get_zone_info": false, 00:07:58.459 "zone_management": false, 00:07:58.459 "zone_append": false, 00:07:58.459 "compare": false, 00:07:58.459 "compare_and_write": false, 00:07:58.459 "abort": false, 00:07:58.459 "seek_hole": true, 00:07:58.459 "seek_data": true, 00:07:58.459 "copy": false, 00:07:58.459 "nvme_iov_md": false 00:07:58.459 }, 00:07:58.459 "driver_specific": { 00:07:58.459 "lvol": { 00:07:58.459 "lvol_store_uuid": "cd1013b8-a054-4ceb-b304-17fa62e41f7d", 00:07:58.459 "base_bdev": "aio_bdev", 00:07:58.459 "thin_provision": false, 00:07:58.459 "num_allocated_clusters": 38, 00:07:58.459 "snapshot": false, 00:07:58.459 "clone": false, 00:07:58.459 "esnap_clone": false 00:07:58.459 } 00:07:58.459 } 00:07:58.459 } 00:07:58.459 ] 00:07:58.459 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:58.459 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:58.459 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:58.718 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:58.718 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:58.718 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:58.978 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:58.978 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.238 [2024-11-26 20:28:59.516591] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.238 20:28:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:59.238 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:07:59.806 request: 00:07:59.806 { 00:07:59.806 "uuid": "cd1013b8-a054-4ceb-b304-17fa62e41f7d", 00:07:59.806 "method": "bdev_lvol_get_lvstores", 00:07:59.806 "req_id": 1 00:07:59.806 } 00:07:59.806 Got JSON-RPC error response 00:07:59.806 response: 00:07:59.806 { 00:07:59.806 "code": -19, 00:07:59.806 "message": "No such device" 00:07:59.806 } 00:07:59.806 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:59.806 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.806 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.806 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.806 20:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.065 aio_bdev 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.065 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:00.324 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf33ee48-55b8-41b6-ac50-3b5dd14d87fa -t 2000 00:08:00.583 [ 00:08:00.583 { 
00:08:00.583 "name": "bf33ee48-55b8-41b6-ac50-3b5dd14d87fa", 00:08:00.583 "aliases": [ 00:08:00.583 "lvs/lvol" 00:08:00.583 ], 00:08:00.583 "product_name": "Logical Volume", 00:08:00.583 "block_size": 4096, 00:08:00.583 "num_blocks": 38912, 00:08:00.583 "uuid": "bf33ee48-55b8-41b6-ac50-3b5dd14d87fa", 00:08:00.583 "assigned_rate_limits": { 00:08:00.583 "rw_ios_per_sec": 0, 00:08:00.583 "rw_mbytes_per_sec": 0, 00:08:00.583 "r_mbytes_per_sec": 0, 00:08:00.583 "w_mbytes_per_sec": 0 00:08:00.583 }, 00:08:00.583 "claimed": false, 00:08:00.583 "zoned": false, 00:08:00.583 "supported_io_types": { 00:08:00.583 "read": true, 00:08:00.583 "write": true, 00:08:00.583 "unmap": true, 00:08:00.583 "flush": false, 00:08:00.583 "reset": true, 00:08:00.583 "nvme_admin": false, 00:08:00.583 "nvme_io": false, 00:08:00.583 "nvme_io_md": false, 00:08:00.583 "write_zeroes": true, 00:08:00.583 "zcopy": false, 00:08:00.583 "get_zone_info": false, 00:08:00.583 "zone_management": false, 00:08:00.583 "zone_append": false, 00:08:00.583 "compare": false, 00:08:00.583 "compare_and_write": false, 00:08:00.583 "abort": false, 00:08:00.583 "seek_hole": true, 00:08:00.583 "seek_data": true, 00:08:00.583 "copy": false, 00:08:00.583 "nvme_iov_md": false 00:08:00.583 }, 00:08:00.583 "driver_specific": { 00:08:00.583 "lvol": { 00:08:00.583 "lvol_store_uuid": "cd1013b8-a054-4ceb-b304-17fa62e41f7d", 00:08:00.583 "base_bdev": "aio_bdev", 00:08:00.583 "thin_provision": false, 00:08:00.583 "num_allocated_clusters": 38, 00:08:00.583 "snapshot": false, 00:08:00.583 "clone": false, 00:08:00.583 "esnap_clone": false 00:08:00.583 } 00:08:00.583 } 00:08:00.583 } 00:08:00.583 ] 00:08:00.583 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:00.583 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:08:00.583 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:00.842 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:00.842 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:08:00.842 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:01.101 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:01.101 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bf33ee48-55b8-41b6-ac50-3b5dd14d87fa 00:08:01.360 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd1013b8-a054-4ceb-b304-17fa62e41f7d 00:08:01.619 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.878 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:02.447 ************************************ 00:08:02.447 END TEST lvs_grow_dirty 00:08:02.447 ************************************ 00:08:02.447 00:08:02.447 real 0m21.615s 00:08:02.447 user 0m45.062s 00:08:02.447 sys 0m8.264s 00:08:02.447 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.447 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.447 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:02.447 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:02.447 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:02.448 nvmf_trace.0 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.448 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.707 rmmod nvme_tcp 00:08:02.707 rmmod nvme_fabrics 00:08:02.707 rmmod nvme_keyring 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63759 ']' 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63759 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63759 ']' 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63759 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:02.707 20:29:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63759 00:08:02.707 killing process with pid 63759 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63759' 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63759 00:08:02.707 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63759 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:02.967 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:03.227 00:08:03.227 real 0m43.397s 00:08:03.227 user 1m10.428s 00:08:03.227 sys 0m11.808s 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.227 ************************************ 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.227 END TEST nvmf_lvs_grow 00:08:03.227 ************************************ 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.227 ************************************ 00:08:03.227 START TEST nvmf_bdev_io_wait 00:08:03.227 ************************************ 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:03.227 * Looking for test storage... 
00:08:03.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.227 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.487 --rc genhtml_branch_coverage=1 00:08:03.487 --rc genhtml_function_coverage=1 00:08:03.487 --rc genhtml_legend=1 00:08:03.487 --rc geninfo_all_blocks=1 00:08:03.487 --rc geninfo_unexecuted_blocks=1 00:08:03.487 00:08:03.487 ' 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.487 --rc genhtml_branch_coverage=1 00:08:03.487 --rc genhtml_function_coverage=1 00:08:03.487 --rc genhtml_legend=1 00:08:03.487 --rc geninfo_all_blocks=1 00:08:03.487 --rc geninfo_unexecuted_blocks=1 00:08:03.487 00:08:03.487 ' 00:08:03.487 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.487 --rc genhtml_branch_coverage=1 00:08:03.487 --rc genhtml_function_coverage=1 00:08:03.487 --rc genhtml_legend=1 00:08:03.487 --rc geninfo_all_blocks=1 00:08:03.488 --rc geninfo_unexecuted_blocks=1 00:08:03.488 00:08:03.488 ' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.488 --rc genhtml_branch_coverage=1 00:08:03.488 --rc genhtml_function_coverage=1 00:08:03.488 --rc genhtml_legend=1 00:08:03.488 --rc geninfo_all_blocks=1 00:08:03.488 --rc geninfo_unexecuted_blocks=1 00:08:03.488 00:08:03.488 ' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
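For orientation before nvmftestinit runs: the target-side bring-up that this bdev_io_wait trace drives further down through rpc_cmd is equivalent to a handful of standalone scripts/rpc.py calls. The sketch below is not part of the captured trace; names and sizes (Malloc0, cnode1, 64 MiB x 512 B, 10.0.0.3:4420) mirror the values that appear later in this log, and rpc_cmd effectively wraps the same RPC client against the default /var/tmp/spdk.sock.

# Minimal sketch (not captured output): the RPC sequence bdev_io_wait.sh issues once nvmf_tgt is up
scripts/rpc.py bdev_set_options -p 5 -c 1            # tiny bdev_io pool/cache so I/O hits the io_wait path
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420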
00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.488 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.489 
20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:03.489 Cannot find device "nvmf_init_br" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:03.489 Cannot find device "nvmf_init_br2" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:03.489 Cannot find device "nvmf_tgt_br" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.489 Cannot find device "nvmf_tgt_br2" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:03.489 Cannot find device "nvmf_init_br" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:03.489 Cannot find device "nvmf_init_br2" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:03.489 Cannot find device "nvmf_tgt_br" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:03.489 Cannot find device "nvmf_tgt_br2" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:03.489 Cannot find device "nvmf_br" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:03.489 Cannot find device "nvmf_init_if" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:03.489 Cannot find device "nvmf_init_if2" 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:03.489 
20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.489 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.748 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:03.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:03.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:03.748 00:08:03.748 --- 10.0.0.3 ping statistics --- 00:08:03.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.748 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:03.748 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:03.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:03.749 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:08:03.749 00:08:03.749 --- 10.0.0.4 ping statistics --- 00:08:03.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.749 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:03.749 00:08:03.749 --- 10.0.0.1 ping statistics --- 00:08:03.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.749 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:03.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:03.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:08:03.749 00:08:03.749 --- 10.0.0.2 ping statistics --- 00:08:03.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.749 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.749 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64137 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64137 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64137 ']' 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.008 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.008 [2024-11-26 20:29:04.186348] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:08:04.008 [2024-11-26 20:29:04.186755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.008 [2024-11-26 20:29:04.341532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.267 [2024-11-26 20:29:04.414561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.267 [2024-11-26 20:29:04.414842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.267 [2024-11-26 20:29:04.414865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.267 [2024-11-26 20:29:04.414877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.267 [2024-11-26 20:29:04.414887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.267 [2024-11-26 20:29:04.416212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.267 [2024-11-26 20:29:04.416331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.267 [2024-11-26 20:29:04.416600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.267 [2024-11-26 20:29:04.416606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 [2024-11-26 20:29:04.566350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 [2024-11-26 20:29:04.583209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 Malloc0 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.267 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.527 [2024-11-26 20:29:04.640088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64165 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64167 00:08:04.527 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.528 20:29:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.528 { 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme$subsystem", 00:08:04.528 "trtype": "$TEST_TRANSPORT", 00:08:04.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "$NVMF_PORT", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.528 "hdgst": ${hdgst:-false}, 00:08:04.528 "ddgst": ${ddgst:-false} 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 } 00:08:04.528 EOF 00:08:04.528 )") 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64169 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64172 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.528 { 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme$subsystem", 00:08:04.528 "trtype": "$TEST_TRANSPORT", 00:08:04.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "$NVMF_PORT", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.528 "hdgst": ${hdgst:-false}, 00:08:04.528 "ddgst": ${ddgst:-false} 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 } 00:08:04.528 EOF 00:08:04.528 )") 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:04.528 { 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme$subsystem", 00:08:04.528 "trtype": "$TEST_TRANSPORT", 00:08:04.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "$NVMF_PORT", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.528 "hdgst": ${hdgst:-false}, 00:08:04.528 "ddgst": ${ddgst:-false} 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 } 00:08:04.528 EOF 00:08:04.528 )") 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.528 { 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme$subsystem", 00:08:04.528 "trtype": "$TEST_TRANSPORT", 00:08:04.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "$NVMF_PORT", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.528 "hdgst": ${hdgst:-false}, 00:08:04.528 "ddgst": ${ddgst:-false} 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 } 00:08:04.528 EOF 00:08:04.528 )") 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme1", 00:08:04.528 "trtype": "tcp", 00:08:04.528 "traddr": "10.0.0.3", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "4420", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.528 "hdgst": false, 00:08:04.528 "ddgst": false 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 }' 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
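For readability: the "params"/"method" fragment printed by the trace above is only the per-controller piece of the config; gen_nvmf_target_json wraps it into a full SPDK JSON config before it reaches bdevperf. A hedged sketch of that overall shape follows; the exact wrapper comes from gen_nvmf_target_json in test/nvmf/common.sh and may carry additional bdev options not shown here.

# Sketch only (assumed overall shape, not captured output):
gen_nvmf_target_json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}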
00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme1", 00:08:04.528 "trtype": "tcp", 00:08:04.528 "traddr": "10.0.0.3", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "4420", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.528 "hdgst": false, 00:08:04.528 "ddgst": false 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 }' 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme1", 00:08:04.528 "trtype": "tcp", 00:08:04.528 "traddr": "10.0.0.3", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "4420", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.528 "hdgst": false, 00:08:04.528 "ddgst": false 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 }' 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.528 "params": { 00:08:04.528 "name": "Nvme1", 00:08:04.528 "trtype": "tcp", 00:08:04.528 "traddr": "10.0.0.3", 00:08:04.528 "adrfam": "ipv4", 00:08:04.528 "trsvcid": "4420", 00:08:04.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.528 "hdgst": false, 00:08:04.528 "ddgst": false 00:08:04.528 }, 00:08:04.528 "method": "bdev_nvme_attach_controller" 00:08:04.528 }' 00:08:04.528 [2024-11-26 20:29:04.707751] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:08:04.528 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64165 00:08:04.528 [2024-11-26 20:29:04.708080] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 20:29:04.708198] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:08:04.528 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:04.528 [2024-11-26 20:29:04.708358] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:04.528 [2024-11-26 20:29:04.720490] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:08:04.528 [2024-11-26 20:29:04.720765] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:04.528 [2024-11-26 20:29:04.731446] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:08:04.528 [2024-11-26 20:29:04.731692] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:04.787 [2024-11-26 20:29:04.931029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.787 [2024-11-26 20:29:04.985829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:04.787 [2024-11-26 20:29:05.000038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.787 [2024-11-26 20:29:05.004624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.787 [2024-11-26 20:29:05.060908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:04.787 [2024-11-26 20:29:05.074887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.787 [2024-11-26 20:29:05.081208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.787 [2024-11-26 20:29:05.138371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:05.045 [2024-11-26 20:29:05.152385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.045 Running I/O for 1 seconds... 00:08:05.045 [2024-11-26 20:29:05.152645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.045 Running I/O for 1 seconds... 00:08:05.045 [2024-11-26 20:29:05.202319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:05.045 [2024-11-26 20:29:05.215143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.045 Running I/O for 1 seconds... 00:08:05.045 Running I/O for 1 seconds... 
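The JSON fragments traced just above come from the gen_nvmf_target_json helper in nvmf/common.sh, which builds one bdev_nvme_attach_controller entry per requested subsystem and hands the result to bdevperf as its bdev configuration. The sketch below is a simplified, hypothetical reconstruction based only on what the trace shows — it is not the actual common.sh implementation — and the default values (tcp, 10.0.0.3, 4420) are simply the ones printed in this run; the wrapper that embeds these entries into bdevperf's full config file is not reproduced here.

```bash
#!/usr/bin/env bash
# Sketch (assumption, not the real nvmf/common.sh code) of what the traced
# gen_nvmf_target_json helper appears to emit: one attach-controller entry per
# subsystem, comma-joined and pretty-printed with jq.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT stand in for the values
# exported by the test environment (tcp, 10.0.0.3 and 4420 in the run above).

TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.3}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json_sketch() {
    local subsystem
    local config=()
    # One bdev_nvme_attach_controller entry per requested subsystem
    # (defaults to subsystem "1", as in the trace).
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the entries with commas, wrap them in an array, and let jq
    # validate and pretty-print the combined JSON.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

gen_nvmf_target_json_sketch 1
```

Run without arguments (or with "1"), this prints the same attach-controller parameters the log shows being passed to each bdevperf instance for Nvme1 on 10.0.0.3:4420 over TCP.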
00:08:05.979 6850.00 IOPS, 26.76 MiB/s 00:08:05.979 Latency(us) 00:08:05.979 [2024-11-26T20:29:06.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.979 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:05.979 Nvme1n1 : 1.02 6845.44 26.74 0.00 0.00 18507.05 8519.68 36223.53 00:08:05.979 [2024-11-26T20:29:06.334Z] =================================================================================================================== 00:08:05.979 [2024-11-26T20:29:06.334Z] Total : 6845.44 26.74 0.00 0.00 18507.05 8519.68 36223.53 00:08:05.979 166208.00 IOPS, 649.25 MiB/s 00:08:05.979 Latency(us) 00:08:05.979 [2024-11-26T20:29:06.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.979 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:05.979 Nvme1n1 : 1.00 165849.47 647.85 0.00 0.00 767.53 376.09 2129.92 00:08:05.979 [2024-11-26T20:29:06.334Z] =================================================================================================================== 00:08:05.979 [2024-11-26T20:29:06.334Z] Total : 165849.47 647.85 0.00 0.00 767.53 376.09 2129.92 00:08:05.979 8056.00 IOPS, 31.47 MiB/s 00:08:05.979 Latency(us) 00:08:05.979 [2024-11-26T20:29:06.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.979 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:05.979 Nvme1n1 : 1.01 8110.16 31.68 0.00 0.00 15695.63 5630.14 26333.56 00:08:05.979 [2024-11-26T20:29:06.334Z] =================================================================================================================== 00:08:05.979 [2024-11-26T20:29:06.334Z] Total : 8110.16 31.68 0.00 0.00 15695.63 5630.14 26333.56 00:08:06.239 6246.00 IOPS, 24.40 MiB/s 00:08:06.239 Latency(us) 00:08:06.239 [2024-11-26T20:29:06.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.239 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:06.239 Nvme1n1 : 1.01 6332.13 24.73 0.00 0.00 20139.24 5957.82 45756.04 00:08:06.239 [2024-11-26T20:29:06.594Z] =================================================================================================================== 00:08:06.239 [2024-11-26T20:29:06.594Z] Total : 6332.13 24.73 0.00 0.00 20139.24 5957.82 45756.04 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64167 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64169 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64172 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:06.239 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.562 rmmod nvme_tcp 00:08:06.562 rmmod nvme_fabrics 00:08:06.562 rmmod nvme_keyring 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64137 ']' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64137 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64137 ']' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64137 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64137 00:08:06.562 killing process with pid 64137 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64137' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64137 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64137 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:06.562 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:06.822 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.822 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:06.822 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:06.822 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:06.822 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:06.822 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:06.822 00:08:06.822 real 0m3.672s 00:08:06.822 user 0m14.390s 00:08:06.822 sys 0m2.219s 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.822 ************************************ 00:08:06.822 END TEST nvmf_bdev_io_wait 00:08:06.822 ************************************ 00:08:06.822 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.083 ************************************ 00:08:07.083 START TEST nvmf_queue_depth 00:08:07.083 ************************************ 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.083 * Looking for test storage... 
00:08:07.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.083 --rc genhtml_branch_coverage=1 00:08:07.083 --rc genhtml_function_coverage=1 00:08:07.083 --rc genhtml_legend=1 00:08:07.083 --rc geninfo_all_blocks=1 00:08:07.083 --rc geninfo_unexecuted_blocks=1 00:08:07.083 00:08:07.083 ' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.083 --rc genhtml_branch_coverage=1 00:08:07.083 --rc genhtml_function_coverage=1 00:08:07.083 --rc genhtml_legend=1 00:08:07.083 --rc geninfo_all_blocks=1 00:08:07.083 --rc geninfo_unexecuted_blocks=1 00:08:07.083 00:08:07.083 ' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.083 --rc genhtml_branch_coverage=1 00:08:07.083 --rc genhtml_function_coverage=1 00:08:07.083 --rc genhtml_legend=1 00:08:07.083 --rc geninfo_all_blocks=1 00:08:07.083 --rc geninfo_unexecuted_blocks=1 00:08:07.083 00:08:07.083 ' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.083 --rc genhtml_branch_coverage=1 00:08:07.083 --rc genhtml_function_coverage=1 00:08:07.083 --rc genhtml_legend=1 00:08:07.083 --rc geninfo_all_blocks=1 00:08:07.083 --rc geninfo_unexecuted_blocks=1 00:08:07.083 00:08:07.083 ' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.083 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:07.084 
20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.084 20:29:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:07.084 Cannot find device "nvmf_init_br" 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:07.084 Cannot find device "nvmf_init_br2" 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:07.084 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:07.343 Cannot find device "nvmf_tgt_br" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.343 Cannot find device "nvmf_tgt_br2" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:07.343 Cannot find device "nvmf_init_br" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:07.343 Cannot find device "nvmf_init_br2" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:07.343 Cannot find device "nvmf_tgt_br" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:07.343 Cannot find device "nvmf_tgt_br2" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:07.343 Cannot find device "nvmf_br" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:07.343 Cannot find device "nvmf_init_if" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:07.343 Cannot find device "nvmf_init_if2" 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.343 20:29:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.343 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.602 
20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:07.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:07.602 00:08:07.602 --- 10.0.0.3 ping statistics --- 00:08:07.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.602 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:07.602 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:07.602 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:08:07.602 00:08:07.602 --- 10.0.0.4 ping statistics --- 00:08:07.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.602 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:07.602 00:08:07.602 --- 10.0.0.1 ping statistics --- 00:08:07.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.602 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:07.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:07.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:07.602 00:08:07.602 --- 10.0.0.2 ping statistics --- 00:08:07.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.602 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64432 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64432 00:08:07.602 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64432 ']' 00:08:07.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.603 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.603 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.603 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.603 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.603 20:29:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.603 [2024-11-26 20:29:07.888582] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:08:07.603 [2024-11-26 20:29:07.888855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.861 [2024-11-26 20:29:08.043910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.861 [2024-11-26 20:29:08.104214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.861 [2024-11-26 20:29:08.104306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.861 [2024-11-26 20:29:08.104318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.861 [2024-11-26 20:29:08.104327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.861 [2024-11-26 20:29:08.104335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.861 [2024-11-26 20:29:08.104733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.861 [2024-11-26 20:29:08.159957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.121 [2024-11-26 20:29:08.276884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.121 Malloc0 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.121 [2024-11-26 20:29:08.329121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64455 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64455 /var/tmp/bdevperf.sock 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64455 ']' 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.121 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.121 [2024-11-26 20:29:08.393061] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:08:08.121 [2024-11-26 20:29:08.393500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64455 ] 00:08:08.378 [2024-11-26 20:29:08.544561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.378 [2024-11-26 20:29:08.606918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.378 [2024-11-26 20:29:08.665331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.636 NVMe0n1 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.636 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:08.636 Running I/O for 10 seconds... 00:08:10.949 6345.00 IOPS, 24.79 MiB/s [2024-11-26T20:29:12.240Z] 7069.00 IOPS, 27.61 MiB/s [2024-11-26T20:29:13.176Z] 7228.00 IOPS, 28.23 MiB/s [2024-11-26T20:29:14.112Z] 7456.50 IOPS, 29.13 MiB/s [2024-11-26T20:29:15.109Z] 7575.00 IOPS, 29.59 MiB/s [2024-11-26T20:29:16.102Z] 7631.50 IOPS, 29.81 MiB/s [2024-11-26T20:29:17.034Z] 7668.29 IOPS, 29.95 MiB/s [2024-11-26T20:29:17.971Z] 7705.50 IOPS, 30.10 MiB/s [2024-11-26T20:29:19.348Z] 7732.78 IOPS, 30.21 MiB/s [2024-11-26T20:29:19.348Z] 7729.70 IOPS, 30.19 MiB/s 00:08:18.993 Latency(us) 00:08:18.993 [2024-11-26T20:29:19.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.993 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:18.993 Verification LBA range: start 0x0 length 0x4000 00:08:18.993 NVMe0n1 : 10.08 7767.60 30.34 0.00 0.00 131140.70 16920.20 95325.09 00:08:18.993 [2024-11-26T20:29:19.348Z] =================================================================================================================== 00:08:18.993 [2024-11-26T20:29:19.348Z] Total : 7767.60 30.34 0.00 0.00 131140.70 16920.20 95325.09 00:08:18.993 { 00:08:18.993 "results": [ 00:08:18.993 { 00:08:18.993 "job": "NVMe0n1", 00:08:18.993 "core_mask": "0x1", 00:08:18.993 "workload": "verify", 00:08:18.993 "status": "finished", 00:08:18.993 "verify_range": { 00:08:18.993 "start": 0, 00:08:18.993 "length": 16384 00:08:18.993 }, 00:08:18.993 "queue_depth": 1024, 00:08:18.993 "io_size": 4096, 00:08:18.993 "runtime": 10.081753, 00:08:18.993 "iops": 7767.597559670427, 00:08:18.993 "mibps": 30.342177967462604, 00:08:18.993 "io_failed": 0, 00:08:18.993 "io_timeout": 0, 00:08:18.993 "avg_latency_us": 131140.7008079905, 00:08:18.993 "min_latency_us": 16920.203636363636, 00:08:18.993 "max_latency_us": 95325.09090909091 00:08:18.993 
} 00:08:18.993 ], 00:08:18.993 "core_count": 1 00:08:18.993 } 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64455 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64455 ']' 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64455 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64455 00:08:18.993 killing process with pid 64455 00:08:18.993 Received shutdown signal, test time was about 10.000000 seconds 00:08:18.993 00:08:18.993 Latency(us) 00:08:18.993 [2024-11-26T20:29:19.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.993 [2024-11-26T20:29:19.348Z] =================================================================================================================== 00:08:18.993 [2024-11-26T20:29:19.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64455' 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64455 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64455 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.993 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.993 rmmod nvme_tcp 00:08:18.993 rmmod nvme_fabrics 00:08:19.252 rmmod nvme_keyring 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64432 ']' 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64432 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64432 ']' 00:08:19.252 
20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64432 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64432 00:08:19.252 killing process with pid 64432 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64432' 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64432 00:08:19.252 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64432 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:19.511 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:19.512 20:29:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.512 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:19.771 00:08:19.771 real 0m12.714s 00:08:19.771 user 0m21.654s 00:08:19.771 sys 0m2.200s 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.771 ************************************ 00:08:19.771 END TEST nvmf_queue_depth 00:08:19.771 ************************************ 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.771 ************************************ 00:08:19.771 START TEST nvmf_target_multipath 00:08:19.771 ************************************ 00:08:19.771 20:29:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:19.771 * Looking for test storage... 
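For reference, the killprocess teardown traced above reduces to the following pattern: confirm a PID was recorded, probe it with kill -0, check the command name so a sudo wrapper is not signalled by mistake, then kill and reap the process. This is a simplified sketch, not the verbatim autotest_common.sh helper; the sudo branch in particular is an assumption.

killprocess() {
    local pid=$1

    # Nothing to clean up if no PID was recorded.
    [[ -n $pid ]] || return 1

    # kill -0 sends no signal; it only checks that the process still exists.
    kill -0 "$pid" || return 0

    # Inspect the command name so we do not signal a sudo wrapper directly.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        # Assumption: target the wrapped child instead of the sudo process itself.
        pid=$(pgrep -P "$pid")
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the process so its exit status is collected before cleanup continues.
    wait "$pid" || true
}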
00:08:19.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.771 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.030 --rc genhtml_branch_coverage=1 00:08:20.030 --rc genhtml_function_coverage=1 00:08:20.030 --rc genhtml_legend=1 00:08:20.030 --rc geninfo_all_blocks=1 00:08:20.030 --rc geninfo_unexecuted_blocks=1 00:08:20.030 00:08:20.030 ' 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.030 --rc genhtml_branch_coverage=1 00:08:20.030 --rc genhtml_function_coverage=1 00:08:20.030 --rc genhtml_legend=1 00:08:20.030 --rc geninfo_all_blocks=1 00:08:20.030 --rc geninfo_unexecuted_blocks=1 00:08:20.030 00:08:20.030 ' 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.030 --rc genhtml_branch_coverage=1 00:08:20.030 --rc genhtml_function_coverage=1 00:08:20.030 --rc genhtml_legend=1 00:08:20.030 --rc geninfo_all_blocks=1 00:08:20.030 --rc geninfo_unexecuted_blocks=1 00:08:20.030 00:08:20.030 ' 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.030 --rc genhtml_branch_coverage=1 00:08:20.030 --rc genhtml_function_coverage=1 00:08:20.030 --rc genhtml_legend=1 00:08:20.030 --rc geninfo_all_blocks=1 00:08:20.030 --rc geninfo_unexecuted_blocks=1 00:08:20.030 00:08:20.030 ' 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.030 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.031 
20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:20.031 20:29:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:20.031 Cannot find device "nvmf_init_br" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:20.031 Cannot find device "nvmf_init_br2" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:20.031 Cannot find device "nvmf_tgt_br" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.031 Cannot find device "nvmf_tgt_br2" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:20.031 Cannot find device "nvmf_init_br" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:20.031 Cannot find device "nvmf_init_br2" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:20.031 Cannot find device "nvmf_tgt_br" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:20.031 Cannot find device "nvmf_tgt_br2" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:20.031 Cannot find device "nvmf_br" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:20.031 Cannot find device "nvmf_init_if" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:20.031 Cannot find device "nvmf_init_if2" 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:20.031 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:20.032 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:20.032 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:20.032 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:20.291 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:20.291 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
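The nvmf_veth_init sequence underway here builds a small bridged topology: two initiator-side veth endpoints (10.0.0.1 and 10.0.0.2) stay in the default namespace, two target-side endpoints (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends all join the nvmf_br bridge. A condensed sketch of one of the two paths follows; the script repeats the same steps for the *_if2/*_br2 pair with the second set of addresses.

# One initiator/target path of the topology (condensed; 10.0.0.2/10.0.0.4 are
# configured the same way on the nvmf_init_if2/nvmf_tgt_if2 pair).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target namespaces can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open TCP/4420 for NVMe-oF, allow forwarding across the bridge, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3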
00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:20.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:20.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:20.292 00:08:20.292 --- 10.0.0.3 ping statistics --- 00:08:20.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.292 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:20.292 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:20.292 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:08:20.292 00:08:20.292 --- 10.0.0.4 ping statistics --- 00:08:20.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.292 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:20.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:20.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:20.292 00:08:20.292 --- 10.0.0.1 ping statistics --- 00:08:20.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.292 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:20.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:20.292 00:08:20.292 --- 10.0.0.2 ping statistics --- 00:08:20.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.292 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64831 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64831 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64831 ']' 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:20.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.292 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:20.292 [2024-11-26 20:29:20.628285] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:08:20.292 [2024-11-26 20:29:20.628419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.551 [2024-11-26 20:29:20.785344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.551 [2024-11-26 20:29:20.849680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.551 [2024-11-26 20:29:20.849742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.551 [2024-11-26 20:29:20.849757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.551 [2024-11-26 20:29:20.849767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.551 [2024-11-26 20:29:20.849776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.551 [2024-11-26 20:29:20.851065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.551 [2024-11-26 20:29:20.851112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.551 [2024-11-26 20:29:20.851329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.551 [2024-11-26 20:29:20.851332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.809 [2024-11-26 20:29:20.909382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.809 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.809 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:20.809 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.809 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.809 20:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:20.809 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.809 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.069 [2024-11-26 20:29:21.268684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.069 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:21.327 Malloc0 00:08:21.327 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:21.895 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.153 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:22.412 [2024-11-26 20:29:22.592898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:22.412 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:22.671 [2024-11-26 20:29:22.857022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:22.671 20:29:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:22.671 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:22.929 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.929 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:22.929 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.929 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:22.929 20:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:24.831 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:24.831 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:24.832 20:29:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64919 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:24.832 20:29:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:25.090 [global] 00:08:25.090 thread=1 00:08:25.090 invalidate=1 00:08:25.090 rw=randrw 00:08:25.090 time_based=1 00:08:25.090 runtime=6 00:08:25.090 ioengine=libaio 00:08:25.090 direct=1 00:08:25.090 bs=4096 00:08:25.090 iodepth=128 00:08:25.090 norandommap=0 00:08:25.090 numjobs=1 00:08:25.090 00:08:25.090 verify_dump=1 00:08:25.090 verify_backlog=512 00:08:25.090 verify_state_save=0 00:08:25.090 do_verify=1 00:08:25.090 verify=crc32c-intel 00:08:25.090 [job0] 00:08:25.090 filename=/dev/nvme0n1 00:08:25.090 Could not set queue depth (nvme0n1) 00:08:25.090 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:25.090 fio-3.35 00:08:25.090 Starting 1 thread 00:08:26.029 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:26.288 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:26.547 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:27.113 20:29:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64919 00:08:31.306 00:08:31.306 job0: (groupid=0, jobs=1): err= 0: pid=64940: Tue Nov 26 20:29:31 2024 00:08:31.306 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6002msec) 00:08:31.306 slat (usec): min=4, max=8083, avg=56.64, stdev=227.12 00:08:31.306 clat (usec): min=1645, max=16252, avg=8492.47, stdev=1510.33 00:08:31.306 lat (usec): min=1666, max=16277, avg=8549.10, stdev=1514.21 00:08:31.306 clat percentiles (usec): 00:08:31.306 | 1.00th=[ 4555], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 7701], 00:08:31.306 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:08:31.306 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[12125], 00:08:31.306 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13829], 99.95th=[14091], 00:08:31.306 | 99.99th=[15139] 00:08:31.306 bw ( KiB/s): min=10448, max=25360, per=51.17%, avg=21140.36, stdev=4665.47, samples=11 00:08:31.306 iops : min= 2612, max= 6340, avg=5285.09, stdev=1166.37, samples=11 00:08:31.306 write: IOPS=5914, BW=23.1MiB/s (24.2MB/s)(125MiB/5407msec); 0 zone resets 00:08:31.306 slat (usec): min=14, max=2484, avg=66.06, stdev=149.85 00:08:31.306 clat (usec): min=2783, max=14206, avg=7338.72, stdev=1265.45 00:08:31.306 lat (usec): min=2812, max=14239, avg=7404.78, stdev=1269.96 00:08:31.306 clat percentiles (usec): 00:08:31.306 | 1.00th=[ 3523], 5.00th=[ 4555], 10.00th=[ 5604], 20.00th=[ 6849], 00:08:31.306 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7701], 00:08:31.306 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8717], 00:08:31.306 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13173], 99.95th=[13566], 00:08:31.306 | 99.99th=[14091] 00:08:31.306 bw ( KiB/s): min=11040, max=25096, per=89.57%, avg=21191.27, stdev=4435.42, samples=11 00:08:31.306 iops : min= 2760, max= 6274, avg=5297.82, stdev=1108.85, samples=11 00:08:31.306 lat (msec) : 2=0.01%, 4=1.07%, 10=92.05%, 20=6.86% 00:08:31.306 cpu : usr=5.88%, sys=24.36%, ctx=5440, majf=0, minf=102 00:08:31.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:31.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.306 issued rwts: total=61995,31981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.306 00:08:31.306 Run status group 0 (all jobs): 00:08:31.306 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6002-6002msec 00:08:31.306 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=125MiB (131MB), run=5407-5407msec 00:08:31.306 00:08:31.306 Disk stats (read/write): 00:08:31.306 nvme0n1: ios=61101/31432, merge=0/0, ticks=494793/214731, in_queue=709524, util=98.68% 00:08:31.306 20:29:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:31.566 20:29:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65027 00:08:31.825 20:29:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:32.084 [global] 00:08:32.084 thread=1 00:08:32.084 invalidate=1 00:08:32.084 rw=randrw 00:08:32.084 time_based=1 00:08:32.084 runtime=6 00:08:32.084 ioengine=libaio 00:08:32.084 direct=1 00:08:32.084 bs=4096 00:08:32.084 iodepth=128 00:08:32.084 norandommap=0 00:08:32.084 numjobs=1 00:08:32.084 00:08:32.084 verify_dump=1 00:08:32.084 verify_backlog=512 00:08:32.084 verify_state_save=0 00:08:32.084 do_verify=1 00:08:32.084 verify=crc32c-intel 00:08:32.084 [job0] 00:08:32.084 filename=/dev/nvme0n1 00:08:32.084 Could not set queue depth (nvme0n1) 00:08:32.084 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.084 fio-3.35 00:08:32.084 Starting 1 thread 00:08:33.020 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:33.279 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:33.538 
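After the listeners are switched to inaccessible and non_optimized while the second fio job is running, check_ana_state polls each multipath leg's ana_state attribute in sysfs until it reports the expected value. A sketch of that polling follows; the existence test and string comparison mirror the trace, while the retry loop and sleep interval are assumptions inferred from the timeout=20 variable.

# Poll /sys/block/<ctrl-path>/ana_state until it matches the expected ANA state.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    while [[ ! -e $ana_state_f || "$(<"$ana_state_f")" != "$ana_state" ]]; do
        # Assumption: retry once per second until the timeout budget is spent.
        sleep 1
        (( timeout-- > 0 )) || return 1
    done
}

# As used in this test: after the listeners are reconfigured, both multipath
# legs must report the new state before I/O verification continues.
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized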
20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:33.538 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:33.796 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:34.363 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65027 00:08:38.545 00:08:38.546 job0: (groupid=0, jobs=1): err= 0: pid=65048: Tue Nov 26 20:29:38 2024 00:08:38.546 read: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(266MiB/6006msec) 00:08:38.546 slat (usec): min=2, max=7079, avg=43.66, stdev=193.90 00:08:38.546 clat (usec): min=294, max=21753, avg=7737.75, stdev=2342.70 00:08:38.546 lat (usec): min=305, max=21766, avg=7781.42, stdev=2356.20 00:08:38.546 clat percentiles (usec): 00:08:38.546 | 1.00th=[ 1713], 5.00th=[ 3621], 10.00th=[ 4621], 20.00th=[ 5932], 00:08:38.546 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:08:38.546 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[11994], 00:08:38.546 | 99.00th=[13698], 99.50th=[16188], 99.90th=[19268], 99.95th=[20317], 00:08:38.546 | 99.99th=[21103] 00:08:38.546 bw ( KiB/s): min=14792, max=35608, per=52.67%, avg=23874.18, stdev=6759.74, samples=11 00:08:38.546 iops : min= 3698, max= 8902, avg=5968.55, stdev=1689.94, samples=11 00:08:38.546 write: IOPS=6528, BW=25.5MiB/s (26.7MB/s)(141MiB/5510msec); 0 zone resets 00:08:38.546 slat (usec): min=13, max=3036, avg=55.74, stdev=140.03 00:08:38.546 clat (usec): min=261, max=19128, avg=6556.05, stdev=2057.98 00:08:38.546 lat (usec): min=296, max=19153, avg=6611.79, stdev=2071.29 00:08:38.546 clat percentiles (usec): 00:08:38.546 | 1.00th=[ 1483], 5.00th=[ 3097], 10.00th=[ 3654], 20.00th=[ 4490], 00:08:38.546 | 30.00th=[ 5407], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7439], 00:08:38.546 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9241], 00:08:38.546 | 99.00th=[11600], 99.50th=[12256], 99.90th=[15008], 99.95th=[16057], 00:08:38.546 | 99.99th=[19006] 00:08:38.546 bw ( KiB/s): min=15216, max=36480, per=91.45%, avg=23880.73, stdev=6668.61, samples=11 00:08:38.546 iops : min= 3804, max= 9120, avg=5970.18, stdev=1667.15, samples=11 00:08:38.546 lat (usec) : 500=0.04%, 750=0.09%, 1000=0.18% 00:08:38.546 lat (msec) : 2=1.14%, 4=7.75%, 10=83.56%, 20=7.20%, 50=0.05% 00:08:38.546 cpu : usr=6.56%, sys=23.23%, ctx=6095, majf=0, minf=90 00:08:38.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:38.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.546 issued rwts: total=68061,35970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.546 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:08:38.546 00:08:38.546 Run status group 0 (all jobs): 00:08:38.546 READ: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=266MiB (279MB), run=6006-6006msec 00:08:38.546 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=141MiB (147MB), run=5510-5510msec 00:08:38.546 00:08:38.546 Disk stats (read/write): 00:08:38.546 nvme0n1: ios=67411/35152, merge=0/0, ticks=497552/214057, in_queue=711609, util=98.68% 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:38.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:38.546 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.806 rmmod nvme_tcp 00:08:38.806 rmmod nvme_fabrics 00:08:38.806 rmmod nvme_keyring 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64831 ']' 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64831 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64831 ']' 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64831 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.806 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64831 00:08:38.806 killing process with pid 64831 00:08:38.806 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.806 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.806 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64831' 00:08:38.806 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64831 00:08:38.806 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64831 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:39.064 
20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:39.064 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:39.323 ************************************ 00:08:39.323 END TEST nvmf_target_multipath 00:08:39.323 ************************************ 00:08:39.323 00:08:39.323 real 0m19.533s 00:08:39.323 user 1m12.952s 00:08:39.323 sys 0m9.893s 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.323 ************************************ 00:08:39.323 START TEST nvmf_zcopy 00:08:39.323 ************************************ 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:39.323 * Looking for test storage... 
00:08:39.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.323 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.584 --rc genhtml_branch_coverage=1 00:08:39.584 --rc genhtml_function_coverage=1 00:08:39.584 --rc genhtml_legend=1 00:08:39.584 --rc geninfo_all_blocks=1 00:08:39.584 --rc geninfo_unexecuted_blocks=1 00:08:39.584 00:08:39.584 ' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.584 --rc genhtml_branch_coverage=1 00:08:39.584 --rc genhtml_function_coverage=1 00:08:39.584 --rc genhtml_legend=1 00:08:39.584 --rc geninfo_all_blocks=1 00:08:39.584 --rc geninfo_unexecuted_blocks=1 00:08:39.584 00:08:39.584 ' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.584 --rc genhtml_branch_coverage=1 00:08:39.584 --rc genhtml_function_coverage=1 00:08:39.584 --rc genhtml_legend=1 00:08:39.584 --rc geninfo_all_blocks=1 00:08:39.584 --rc geninfo_unexecuted_blocks=1 00:08:39.584 00:08:39.584 ' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.584 --rc genhtml_branch_coverage=1 00:08:39.584 --rc genhtml_function_coverage=1 00:08:39.584 --rc genhtml_legend=1 00:08:39.584 --rc geninfo_all_blocks=1 00:08:39.584 --rc geninfo_unexecuted_blocks=1 00:08:39.584 00:08:39.584 ' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
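The lt/cmp_versions trace above is the stock scripts/common.sh check deciding whether the installed lcov (1.15 here) is older than 2.x before keeping the --rc lcov_branch_coverage/lcov_function_coverage options. A minimal sketch of that comparison, assuming purely numeric version fields (the real helper also runs each field through the decimal regex check visible in the trace):

    # split both versions on '.', '-' and ':' and compare them field by field
    lt() {
      local -a ver1 ver2
      local v n
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer: not "less than"
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older: "less than"
      done
      return 1                                            # equal versions are not "less than"
    }

    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

Since 1 < 2 already in the first field, lt 1.15 2 succeeds and the extra coverage flags are kept, which matches the LCOV_OPTS/LCOV values exported in the trace above.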
00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.584 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
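nvmftestinit with NET_TYPE=virt never touches a physical NIC: prepare_net_devs falls through to nvmf_veth_init, and the long ip/iptables trace that follows boils down to one small virtual topology. A condensed sketch of what it builds (the trace additionally brings every interface up and tags the iptables rules with SPDK_NVMF comments):

    ip netns add nvmf_tgt_ns_spdk                                   # target runs inside its own namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # two initiator-side veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # two target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                 # one bridge joins the four peer ends
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks at the end of the trace verify both directions across the bridge before the target is started, and 10.0.0.3 is the address the zcopy subsystem listener binds to further down.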
00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:39.585 Cannot find device "nvmf_init_br" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:39.585 20:29:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:39.585 Cannot find device "nvmf_init_br2" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:39.585 Cannot find device "nvmf_tgt_br" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.585 Cannot find device "nvmf_tgt_br2" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:39.585 Cannot find device "nvmf_init_br" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:39.585 Cannot find device "nvmf_init_br2" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:39.585 Cannot find device "nvmf_tgt_br" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:39.585 Cannot find device "nvmf_tgt_br2" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:39.585 Cannot find device "nvmf_br" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:39.585 Cannot find device "nvmf_init_if" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:39.585 Cannot find device "nvmf_init_if2" 00:08:39.585 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:39.586 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.855 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.855 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.855 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:39.855 20:29:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:39.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:39.855 00:08:39.855 --- 10.0.0.3 ping statistics --- 00:08:39.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.855 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:39.855 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:39.855 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:08:39.855 00:08:39.855 --- 10.0.0.4 ping statistics --- 00:08:39.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.855 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:39.855 00:08:39.855 --- 10.0.0.1 ping statistics --- 00:08:39.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.855 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:39.855 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:39.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:08:39.855 00:08:39.855 --- 10.0.0.2 ping statistics --- 00:08:39.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.856 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65346 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65346 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65346 ']' 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.856 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.141 [2024-11-26 20:29:40.247652] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:08:40.141 [2024-11-26 20:29:40.247767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.141 [2024-11-26 20:29:40.402571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.141 [2024-11-26 20:29:40.471573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.141 [2024-11-26 20:29:40.471629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.141 [2024-11-26 20:29:40.471644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.141 [2024-11-26 20:29:40.471665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.141 [2024-11-26 20:29:40.471674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.141 [2024-11-26 20:29:40.472152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.400 [2024-11-26 20:29:40.529372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.400 [2024-11-26 20:29:40.651950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.400 [2024-11-26 20:29:40.672095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.400 malloc0 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:40.400 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.401 { 00:08:40.401 "params": { 00:08:40.401 "name": "Nvme$subsystem", 00:08:40.401 "trtype": "$TEST_TRANSPORT", 00:08:40.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.401 "adrfam": "ipv4", 00:08:40.401 "trsvcid": "$NVMF_PORT", 00:08:40.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.401 "hdgst": ${hdgst:-false}, 00:08:40.401 "ddgst": ${ddgst:-false} 00:08:40.401 }, 00:08:40.401 "method": "bdev_nvme_attach_controller" 00:08:40.401 } 00:08:40.401 EOF 00:08:40.401 )") 00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
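Condensed, the rpc_cmd sequence above gives the zcopy test a plain NVMe/TCP target: a TCP transport with zero-copy enabled (-o -c 0 --zcopy), one subsystem backed by a 32 MB malloc bdev, and data plus discovery listeners on 10.0.0.3:4420. Roughly the equivalent standalone calls, assuming rpc_cmd maps to plain scripts/rpc.py invocations (the multipath test above calls rpc.py directly in the same way):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy       # zero-copy TCP transport
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MB bdev, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON printed next is what gen_nvmf_target_json hands bdevperf over /dev/fd/62, pointing its Nvme1 controller at that 10.0.0.3:4420 listener for the 10-second verify workload.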
00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:40.401 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.401 "params": { 00:08:40.401 "name": "Nvme1", 00:08:40.401 "trtype": "tcp", 00:08:40.401 "traddr": "10.0.0.3", 00:08:40.401 "adrfam": "ipv4", 00:08:40.401 "trsvcid": "4420", 00:08:40.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.401 "hdgst": false, 00:08:40.401 "ddgst": false 00:08:40.401 }, 00:08:40.401 "method": "bdev_nvme_attach_controller" 00:08:40.401 }' 00:08:40.659 [2024-11-26 20:29:40.771565] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:08:40.659 [2024-11-26 20:29:40.771687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65377 ] 00:08:40.659 [2024-11-26 20:29:40.922866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.659 [2024-11-26 20:29:41.000274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.918 [2024-11-26 20:29:41.073225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.918 Running I/O for 10 seconds... 00:08:43.233 5881.00 IOPS, 45.95 MiB/s [2024-11-26T20:29:44.526Z] 5907.50 IOPS, 46.15 MiB/s [2024-11-26T20:29:45.461Z] 5902.00 IOPS, 46.11 MiB/s [2024-11-26T20:29:46.396Z] 5905.25 IOPS, 46.13 MiB/s [2024-11-26T20:29:47.333Z] 5910.60 IOPS, 46.18 MiB/s [2024-11-26T20:29:48.318Z] 5911.17 IOPS, 46.18 MiB/s [2024-11-26T20:29:49.253Z] 5915.00 IOPS, 46.21 MiB/s [2024-11-26T20:29:50.627Z] 5922.38 IOPS, 46.27 MiB/s [2024-11-26T20:29:51.560Z] 5919.89 IOPS, 46.25 MiB/s [2024-11-26T20:29:51.560Z] 5922.00 IOPS, 46.27 MiB/s 00:08:51.205 Latency(us) 00:08:51.205 [2024-11-26T20:29:51.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.205 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:51.205 Verification LBA range: start 0x0 length 0x1000 00:08:51.205 Nvme1n1 : 10.02 5922.48 46.27 0.00 0.00 21541.97 1936.29 31218.97 00:08:51.205 [2024-11-26T20:29:51.560Z] =================================================================================================================== 00:08:51.205 [2024-11-26T20:29:51.560Z] Total : 5922.48 46.27 0.00 0.00 21541.97 1936.29 31218.97 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65494 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:51.205 { 00:08:51.205 "params": { 00:08:51.205 
"name": "Nvme$subsystem", 00:08:51.205 "trtype": "$TEST_TRANSPORT", 00:08:51.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.205 "adrfam": "ipv4", 00:08:51.205 "trsvcid": "$NVMF_PORT", 00:08:51.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.205 "hdgst": ${hdgst:-false}, 00:08:51.205 "ddgst": ${ddgst:-false} 00:08:51.205 }, 00:08:51.205 "method": "bdev_nvme_attach_controller" 00:08:51.205 } 00:08:51.205 EOF 00:08:51.205 )") 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:51.205 [2024-11-26 20:29:51.423812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.423854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:51.205 20:29:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:51.205 "params": { 00:08:51.205 "name": "Nvme1", 00:08:51.205 "trtype": "tcp", 00:08:51.205 "traddr": "10.0.0.3", 00:08:51.205 "adrfam": "ipv4", 00:08:51.205 "trsvcid": "4420", 00:08:51.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.205 "hdgst": false, 00:08:51.205 "ddgst": false 00:08:51.205 }, 00:08:51.205 "method": "bdev_nvme_attach_controller" 00:08:51.205 }' 00:08:51.205 [2024-11-26 20:29:51.431775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.431806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.439779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.439806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.451785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.451816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.460847] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:08:51.205 [2024-11-26 20:29:51.460915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65494 ] 00:08:51.205 [2024-11-26 20:29:51.463798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.463830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.475790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.475827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.487781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.487807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.499790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.499817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.511786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.511812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.205 [2024-11-26 20:29:51.523790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.205 [2024-11-26 20:29:51.523817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.206 [2024-11-26 20:29:51.535791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.206 [2024-11-26 20:29:51.535817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.206 [2024-11-26 20:29:51.547795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.206 [2024-11-26 20:29:51.547820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.559823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.559851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.571814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.571842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.583809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.583835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.595812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.595837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.606216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.463 [2024-11-26 20:29:51.607819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.607845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.619848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.619886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.631829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.631858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.643829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.643856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.655832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.655859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.667842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.667870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.668571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.463 [2024-11-26 20:29:51.679835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.679861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.691864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.691898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.703870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.703914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.715873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.715907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.727872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.727906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.730756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.463 [2024-11-26 20:29:51.739870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.739903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.751891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.751926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.763863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.763889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.775865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
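The "Default socket implementation override: uring" notice above indicates the initiator-side bdevperf app is using the io_uring socket implementation instead of the posix default. One common way to apply such an override is the sock_set_default_impl RPC issued before framework initialization; whether this run uses that exact path or an equivalent configuration hook is not visible in this excerpt, so the lines below are illustrative only (the config file name is the hypothetical one from the sketch above).

    # illustrative only: select the uring socket implementation before subsystem init
    build/examples/bdevperf --json /tmp/nvmf_target.json -q 64 -o 8192 -w randread -t 5 --wait-for-rpc &
    scripts/rpc.py sock_set_default_impl -i uring
    scripts/rpc.py framework_start_init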
00:08:51.463 [2024-11-26 20:29:51.775890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.787892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.787924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.463 [2024-11-26 20:29:51.799900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.463 [2024-11-26 20:29:51.799930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.464 [2024-11-26 20:29:51.811912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.464 [2024-11-26 20:29:51.811942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.823919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.823950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.835926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.835955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.847932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.847965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 Running I/O for 5 seconds... 00:08:51.721 [2024-11-26 20:29:51.855930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.855960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.872903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.872947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.889187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.889234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.908436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.908469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.923102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.923135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.940809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.940844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.955858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.955895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.965752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.965784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
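The pair of errors that repeats through the rest of this stretch ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc_ns_paused) is the target rejecting an add-namespace request because NSID 1 already exists on nqn.2016-06.io.spdk:cnode1. The test appears to reissue the same request repeatedly while bdevperf I/O is in flight, so the failures read as an intentional part of the exercise rather than a defect. A rough illustration of the call being rejected is shown below; the bdev name and RPC socket path are assumptions, while the NQN and NSID come from the messages above.

    # expected to fail with "Requested NSID 1 already in use" while NSID 1 is attached
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1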
00:08:51.721 [2024-11-26 20:29:51.981724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.981757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:51.991216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:51.991262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:52.006560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:52.006593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:52.024462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:52.024496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:52.039349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:52.039382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:52.049234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:52.049270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.721 [2024-11-26 20:29:52.065566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.721 [2024-11-26 20:29:52.065599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.979 [2024-11-26 20:29:52.082713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.979 [2024-11-26 20:29:52.082747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.979 [2024-11-26 20:29:52.098630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.979 [2024-11-26 20:29:52.098661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.979 [2024-11-26 20:29:52.115920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.979 [2024-11-26 20:29:52.115952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.130670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.130701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.146675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.146722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.163662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.163693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.180349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.180380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.197953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 
[2024-11-26 20:29:52.197985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.213810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.213841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.222993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.223040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.239175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.239224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.255824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.255856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.273209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.273266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.289725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.289756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.306418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.306449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.980 [2024-11-26 20:29:52.322440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.980 [2024-11-26 20:29:52.322471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.238 [2024-11-26 20:29:52.340534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.238 [2024-11-26 20:29:52.340567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.238 [2024-11-26 20:29:52.355740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.238 [2024-11-26 20:29:52.355773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.238 [2024-11-26 20:29:52.375168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.238 [2024-11-26 20:29:52.375215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.238 [2024-11-26 20:29:52.389479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.238 [2024-11-26 20:29:52.389513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.238 [2024-11-26 20:29:52.405233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.238 [2024-11-26 20:29:52.405300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.238 [2024-11-26 20:29:52.422399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.238 [2024-11-26 20:29:52.422439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.438385] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.438429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.448369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.448401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.463408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.463440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.480031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.480062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.496222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.496282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.513378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.513424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.531334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.531375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.546375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.546424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.555933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.555967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.572332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.572371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.239 [2024-11-26 20:29:52.588517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.239 [2024-11-26 20:29:52.588556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.606799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.606840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.621251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.621283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.637348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.637380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.655426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.655458] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.670000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.670033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.686270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.686297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.702470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.702504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.719805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.719844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.734890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.734944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.497 [2024-11-26 20:29:52.744478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.497 [2024-11-26 20:29:52.744511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.498 [2024-11-26 20:29:52.760405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.498 [2024-11-26 20:29:52.760456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.498 [2024-11-26 20:29:52.776549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.498 [2024-11-26 20:29:52.776590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.498 [2024-11-26 20:29:52.793849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.498 [2024-11-26 20:29:52.794481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.498 [2024-11-26 20:29:52.810853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.498 [2024-11-26 20:29:52.810889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.498 [2024-11-26 20:29:52.828084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.498 [2024-11-26 20:29:52.828125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.498 [2024-11-26 20:29:52.844242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.498 [2024-11-26 20:29:52.844278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 11590.00 IOPS, 90.55 MiB/s [2024-11-26T20:29:53.111Z] [2024-11-26 20:29:52.861689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.861886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.876531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.876699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 
20:29:52.892727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.892764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.910046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.910082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.927100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.927135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.944950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.944986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.959816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.960022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.975558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.975597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:52.984651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:52.984688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.000951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.000988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.019136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.019322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.033769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.033812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.050199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.050255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.066478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.066665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.082860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.082899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.092159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.092198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.756 [2024-11-26 20:29:53.108052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.756 [2024-11-26 20:29:53.108093] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.125125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.125166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.141605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.141812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.157962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.158003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.175703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.175743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.189950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.189988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.205640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.205677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.224194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.224247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.238856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.238897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.256072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.256110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.272830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.273006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.289270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.289302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.307296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.307332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.321808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.321844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.337846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.337883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.015 [2024-11-26 20:29:53.357115] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.015 [2024-11-26 20:29:53.357151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.274 [2024-11-26 20:29:53.371818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.371855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.381171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.381209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.397748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.397785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.416874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.416910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.431679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.431716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.450552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.450738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.465565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.465726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.475429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.475464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.491749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.491790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.508967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.509162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.523486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.523527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.539008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.539184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.556749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.556788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.572058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.572098] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.590526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.590699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.605233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.605272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.275 [2024-11-26 20:29:53.620314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.275 [2024-11-26 20:29:53.620352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.635762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.635942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.652991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.653029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.668665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.668702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.678159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.678196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.694378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.694552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.711609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.711646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.728750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.728787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.746077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.746281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.761964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.762004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.778473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.778511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.795319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.795353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.811691] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.811729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.828669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.828838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.843644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.843699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 11666.50 IOPS, 91.14 MiB/s [2024-11-26T20:29:53.889Z] [2024-11-26 20:29:53.859405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.859442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.534 [2024-11-26 20:29:53.878854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.534 [2024-11-26 20:29:53.879037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.893811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.893977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.904186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.904237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.919200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.919253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.935675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.935719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.952190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.952240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.970811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.970849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:53.985271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:53.985307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.000242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.000278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.015687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.015881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.032852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
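As a quick sanity check on the bdevperf samples interleaved with the RPC errors, the reported throughput divides out to an 8 KiB I/O size in every sample, which presumably reflects the block size the benchmark was started with (e.g. -o 8192; the exact flag is not shown in this excerpt):

    11590.00 IOPS * 8 KiB = 92720 KiB/s ≈ 90.55 MiB/s
    11666.50 IOPS * 8 KiB = 93332 KiB/s ≈ 91.14 MiB/s
    (the later 11643.33 and 11618.00 IOPS samples follow the same pattern)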
00:08:53.794 [2024-11-26 20:29:54.032890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.050488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.050654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.065657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.065822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.083306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.083343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.100444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.100606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.115523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.115698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.794 [2024-11-26 20:29:54.132644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.794 [2024-11-26 20:29:54.132680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.147290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.147322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.162848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.163013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.179787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.179951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.196487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.196652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.213848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.214017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.230369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.230526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.246588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.246747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.263384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.263553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.280364] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.280519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.297266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.297423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.315357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.315513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.331314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.331472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.347552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.347717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.365792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.366037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.381624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.381858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.053 [2024-11-26 20:29:54.399111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.053 [2024-11-26 20:29:54.399410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.415586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.415637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.433166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.433464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.448643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.448690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.460551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.460595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.476467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.476515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.492711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.492757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.510067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.510114] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.526215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.526275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.543280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.543325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.559216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.559276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.568807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.568847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.585172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.585235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.602387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.602434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.619877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.619918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.634757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.634802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.313 [2024-11-26 20:29:54.651768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.313 [2024-11-26 20:29:54.651823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.667724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.667773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.684818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.684867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.701811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.702026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.717166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.717415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.733092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.733353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.750670] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.750901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.766423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.766662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.784103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.784423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.798917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.799262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.815523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.815820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.831493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.831845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.849912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.850259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 11643.33 IOPS, 90.96 MiB/s [2024-11-26T20:29:54.928Z] [2024-11-26 20:29:54.865259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.865550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.875141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.875434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.891210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.891528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.908355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.908415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.573 [2024-11-26 20:29:54.924457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.573 [2024-11-26 20:29:54.924510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:54.942319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:54.942375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:54.956983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:54.957035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:54.972741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:54.832 [2024-11-26 20:29:54.972794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:54.988924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:54.988977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.006202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.006266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.022907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.022980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.040528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.040581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.055409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.055675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.072038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.072099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.088360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.088419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.105676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.105740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.122032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.122086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.138721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.138972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.155945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.156005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.832 [2024-11-26 20:29:55.172126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.832 [2024-11-26 20:29:55.172181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.189075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.189124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.204875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.204927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.220842] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.220893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.230844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.230899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.246640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.246679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.264400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.264454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.279245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.279304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.091 [2024-11-26 20:29:55.288401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.091 [2024-11-26 20:29:55.288444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.304791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.304829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.321803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.321984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.338500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.338540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.356677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.356862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.371525] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.371709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.387012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.387180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.396586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.396623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.412664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.412700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.422600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.422637] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.092 [2024-11-26 20:29:55.438504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.092 [2024-11-26 20:29:55.438540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.454726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.454763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.466689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.466725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.484000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.484036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.499015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.499050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.508791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.508828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.523635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.523679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.535544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.535580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.553429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.555057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.569396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.569588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.584620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.584829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.602560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.602603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.618028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.618075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.634576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.634613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.650577] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.650616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.669747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.669789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.685085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.685126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.350 [2024-11-26 20:29:55.702660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.350 [2024-11-26 20:29:55.702698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.717457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.717493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.726991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.727026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.743537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.743747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.760796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.760842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.777435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.777481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.793407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.793450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.810543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.810798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.828537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.828590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.843414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.843460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 11618.00 IOPS, 90.77 MiB/s [2024-11-26T20:29:55.964Z] [2024-11-26 20:29:55.859235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.859289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.875469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:55.609 [2024-11-26 20:29:55.875510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.892745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.892796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.908733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.908775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.924770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.924809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.941850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.942044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.609 [2024-11-26 20:29:55.958497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.609 [2024-11-26 20:29:55.958698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:55.976611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:55.976836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:55.991828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:55.992050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.001813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.002013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.017971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.018157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.036015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.036244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.051058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.051254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.066925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.067102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.077104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.077300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.091651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.091843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.109270] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.109456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.123986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.124169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.142000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.142206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.156697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.156870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.171935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.172118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.181062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.181233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.867 [2024-11-26 20:29:56.197459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.867 [2024-11-26 20:29:56.197637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.868 [2024-11-26 20:29:56.207277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.868 [2024-11-26 20:29:56.207442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.222608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.222649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.239917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.239960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.256918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.257113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.272141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.272320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.287957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.288136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.304376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.304673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.321951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.322171] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.126 [2024-11-26 20:29:56.338035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.126 [2024-11-26 20:29:56.338320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.355443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.355689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.371032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.371298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.389032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.389275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.403939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.404101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.420121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.420322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.436583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.436807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.453680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.453922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.127 [2024-11-26 20:29:56.470098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.127 [2024-11-26 20:29:56.470339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.486260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.486472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.504374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.504551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.520537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.520586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.537715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.537761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.553509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.553548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.572214] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.572271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.587933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.587976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.606337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.606374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.621041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.621081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.636023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.636060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.653063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.653102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.668009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.668054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.683219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.683279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.692426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.692462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.709014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.709053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.385 [2024-11-26 20:29:56.725632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.385 [2024-11-26 20:29:56.725804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.742624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.742660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.759283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.759324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.775663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.775710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.791621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.791671] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.809541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.809586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.824146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.824198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.839161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.839196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.848713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.848887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 11618.60 IOPS, 90.77 MiB/s [2024-11-26T20:29:56.999Z] [2024-11-26 20:29:56.860859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.860894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 00:08:56.644 Latency(us) 00:08:56.644 [2024-11-26T20:29:56.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.644 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:56.644 Nvme1n1 : 5.01 11621.84 90.80 0.00 0.00 11000.00 4617.31 19779.96 00:08:56.644 [2024-11-26T20:29:56.999Z] =================================================================================================================== 00:08:56.644 [2024-11-26T20:29:56.999Z] Total : 11621.84 90.80 0.00 0.00 11000.00 4617.31 19779.96 00:08:56.644 [2024-11-26 20:29:56.872448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.872483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.884433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.884468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.896470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.896510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.908469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.908509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.920468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.920517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.932484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 20:29:56.932530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.644 [2024-11-26 20:29:56.944474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.644 [2024-11-26 
20:29:56.944516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.645 [2024-11-26 20:29:56.956478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.645 [2024-11-26 20:29:56.956521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.645 [2024-11-26 20:29:56.968498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.645 [2024-11-26 20:29:56.968540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.645 [2024-11-26 20:29:56.980487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.645 [2024-11-26 20:29:56.980528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.645 [2024-11-26 20:29:56.992490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.645 [2024-11-26 20:29:56.992539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 [2024-11-26 20:29:57.004479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.903 [2024-11-26 20:29:57.004515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 [2024-11-26 20:29:57.016491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.903 [2024-11-26 20:29:57.016531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 [2024-11-26 20:29:57.028513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.903 [2024-11-26 20:29:57.028560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 [2024-11-26 20:29:57.040496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.903 [2024-11-26 20:29:57.040534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 [2024-11-26 20:29:57.052494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.903 [2024-11-26 20:29:57.052538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 [2024-11-26 20:29:57.064488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.903 [2024-11-26 20:29:57.064531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.903 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65494) - No such process 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65494 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.903 delay0 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.903 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:57.161 [2024-11-26 20:29:57.269302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:03.716 Initializing NVMe Controllers 00:09:03.716 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.716 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:03.716 Initialization complete. Launching workers. 00:09:03.716 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:09:03.716 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:09:03.716 success 216, unsuccessful 140, failed 0 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.716 rmmod nvme_tcp 00:09:03.716 rmmod nvme_fabrics 00:09:03.716 rmmod nvme_keyring 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65346 ']' 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65346 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65346 ']' 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65346 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.716 20:30:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65346 00:09:03.716 killing process with pid 65346 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65346' 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65346 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65346 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:03.716 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.717 20:30:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:03.717 00:09:03.717 real 0m24.350s 00:09:03.717 user 0m40.015s 00:09:03.717 sys 0m6.684s 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.717 ************************************ 00:09:03.717 END TEST nvmf_zcopy 00:09:03.717 ************************************ 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.717 ************************************ 00:09:03.717 START TEST nvmf_nmic 00:09:03.717 ************************************ 00:09:03.717 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:03.717 * Looking for test storage... 00:09:03.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.717 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.717 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.717 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.976 20:30:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.976 --rc genhtml_branch_coverage=1 00:09:03.976 --rc genhtml_function_coverage=1 00:09:03.976 --rc genhtml_legend=1 00:09:03.976 --rc geninfo_all_blocks=1 00:09:03.976 --rc geninfo_unexecuted_blocks=1 00:09:03.976 00:09:03.976 ' 00:09:03.976 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.976 --rc genhtml_branch_coverage=1 00:09:03.976 --rc genhtml_function_coverage=1 00:09:03.976 --rc genhtml_legend=1 00:09:03.976 --rc geninfo_all_blocks=1 00:09:03.976 --rc geninfo_unexecuted_blocks=1 00:09:03.976 00:09:03.976 ' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.977 --rc genhtml_branch_coverage=1 00:09:03.977 --rc genhtml_function_coverage=1 00:09:03.977 --rc genhtml_legend=1 00:09:03.977 --rc geninfo_all_blocks=1 00:09:03.977 --rc geninfo_unexecuted_blocks=1 00:09:03.977 00:09:03.977 ' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.977 --rc genhtml_branch_coverage=1 00:09:03.977 --rc genhtml_function_coverage=1 00:09:03.977 --rc genhtml_legend=1 00:09:03.977 --rc geninfo_all_blocks=1 00:09:03.977 --rc geninfo_unexecuted_blocks=1 00:09:03.977 00:09:03.977 ' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:03.977 20:30:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:03.977 20:30:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:03.977 Cannot 
find device "nvmf_init_br" 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:03.977 Cannot find device "nvmf_init_br2" 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:03.977 Cannot find device "nvmf_tgt_br" 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.977 Cannot find device "nvmf_tgt_br2" 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:03.977 Cannot find device "nvmf_init_br" 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:03.977 Cannot find device "nvmf_init_br2" 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:03.977 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:03.977 Cannot find device "nvmf_tgt_br" 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:03.978 Cannot find device "nvmf_tgt_br2" 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:03.978 Cannot find device "nvmf_br" 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:03.978 Cannot find device "nvmf_init_if" 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:03.978 Cannot find device "nvmf_init_if2" 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:03.978 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:04.236 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:04.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:09:04.237 00:09:04.237 --- 10.0.0.3 ping statistics --- 00:09:04.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.237 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:04.237 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:04.237 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:09:04.237 00:09:04.237 --- 10.0.0.4 ping statistics --- 00:09:04.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.237 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:04.237 00:09:04.237 --- 10.0.0.1 ping statistics --- 00:09:04.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.237 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:04.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:04.237 00:09:04.237 --- 10.0.0.2 ping statistics --- 00:09:04.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.237 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.237 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65869 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65869 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65869 ']' 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.496 20:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 [2024-11-26 20:30:04.669020] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
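nvmfappstart has just launched the target inside the namespace (the @227 line earlier prepended the `ip netns exec nvmf_tgt_ns_spdk` prefix to NVMF_APP) and waitforlisten now blocks until the app answers on its RPC socket. A simplified sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket; the real waitforlisten in autotest_common.sh is more defensive about the process dying early:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the JSON-RPC socket until the target is ready (waitforlisten, simplified).
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done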
00:09:04.496 [2024-11-26 20:30:04.669125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.496 [2024-11-26 20:30:04.821522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.755 [2024-11-26 20:30:04.896137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.755 [2024-11-26 20:30:04.896199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.755 [2024-11-26 20:30:04.896215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.755 [2024-11-26 20:30:04.896241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.755 [2024-11-26 20:30:04.896251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.755 [2024-11-26 20:30:04.897493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.755 [2024-11-26 20:30:04.897636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.755 [2024-11-26 20:30:04.897718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.755 [2024-11-26 20:30:04.897720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.755 [2024-11-26 20:30:04.957054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:04.755 [2024-11-26 20:30:05.078608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.755 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 Malloc0 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.014 20:30:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 [2024-11-26 20:30:05.143335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 test case1: single bdev can't be used in multiple subsystems 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 [2024-11-26 20:30:05.167158] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:05.014 [2024-11-26 20:30:05.167195] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:05.014 [2024-11-26 20:30:05.167208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.014 request: 00:09:05.014 { 00:09:05.014 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:05.014 "namespace": { 00:09:05.014 "bdev_name": "Malloc0", 00:09:05.014 "no_auto_visible": false, 00:09:05.014 "hide_metadata": false 00:09:05.014 }, 00:09:05.014 "method": "nvmf_subsystem_add_ns", 00:09:05.014 "req_id": 1 00:09:05.014 } 00:09:05.014 Got JSON-RPC error response 00:09:05.014 response: 00:09:05.014 { 00:09:05.014 "code": -32602, 00:09:05.014 "message": "Invalid parameters" 00:09:05.014 } 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:05.014 Adding namespace failed - expected result. 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:05.014 test case2: host connect to nvmf target in multiple paths 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 [2024-11-26 20:30:05.179286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:05.014 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:05.272 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.272 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:05.272 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.272 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:05.272 20:30:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:07.169 20:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:07.169 [global] 00:09:07.169 thread=1 00:09:07.169 invalidate=1 00:09:07.169 rw=write 00:09:07.169 time_based=1 00:09:07.169 runtime=1 00:09:07.169 ioengine=libaio 00:09:07.169 direct=1 00:09:07.169 bs=4096 00:09:07.169 iodepth=1 00:09:07.169 norandommap=0 00:09:07.169 numjobs=1 00:09:07.169 00:09:07.169 verify_dump=1 00:09:07.169 verify_backlog=512 00:09:07.169 verify_state_save=0 00:09:07.169 do_verify=1 00:09:07.169 verify=crc32c-intel 00:09:07.169 [job0] 00:09:07.169 filename=/dev/nvme0n1 00:09:07.169 Could not set queue depth (nvme0n1) 00:09:07.427 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.427 fio-3.35 00:09:07.427 Starting 1 thread 00:09:08.802 00:09:08.802 job0: (groupid=0, jobs=1): err= 0: pid=65949: Tue Nov 26 20:30:08 2024 00:09:08.802 read: IOPS=2645, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:08.802 slat (nsec): min=12580, max=64558, avg=20170.87, stdev=6770.74 00:09:08.802 clat (usec): min=143, max=881, avg=187.24, stdev=25.49 00:09:08.802 lat (usec): min=159, max=903, avg=207.41, stdev=27.81 00:09:08.802 clat percentiles (usec): 00:09:08.802 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:09:08.802 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:08.802 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 223], 00:09:08.802 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 314], 99.95th=[ 742], 00:09:08.802 | 99.99th=[ 881] 00:09:08.802 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:08.802 slat (nsec): min=17758, max=95770, avg=27532.25, stdev=8595.29 00:09:08.802 clat (usec): min=88, max=330, avg=115.18, stdev=16.59 00:09:08.802 lat (usec): min=108, max=426, avg=142.71, stdev=21.98 00:09:08.802 clat percentiles (usec): 00:09:08.802 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:09:08.802 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 116], 00:09:08.802 | 70.00th=[ 121], 80.00th=[ 129], 90.00th=[ 139], 95.00th=[ 147], 00:09:08.802 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 194], 99.95th=[ 249], 00:09:08.802 | 99.99th=[ 330] 00:09:08.802 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:08.802 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:08.802 lat (usec) : 100=7.62%, 250=92.17%, 500=0.17%, 750=0.02%, 1000=0.02% 00:09:08.802 cpu : usr=3.50%, sys=10.50%, ctx=5720, majf=0, minf=5 00:09:08.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.802 issued rwts: total=2648,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.802 00:09:08.803 Run status group 0 (all jobs): 00:09:08.803 READ: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.3MiB (10.8MB), run=1001-1001msec 00:09:08.803 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:08.803 00:09:08.803 Disk stats (read/write): 00:09:08.803 nvme0n1: 
ios=2527/2560, merge=0/0, ticks=480/329, in_queue=809, util=91.58% 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.803 rmmod nvme_tcp 00:09:08.803 rmmod nvme_fabrics 00:09:08.803 rmmod nvme_keyring 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65869 ']' 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65869 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65869 ']' 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65869 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.803 20:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65869 00:09:08.803 killing process with pid 65869 00:09:08.803 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.803 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.803 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65869' 00:09:08.803 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@973 -- # kill 65869 00:09:08.803 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65869 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.061 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:09.320 00:09:09.320 real 0m5.560s 00:09:09.320 user 0m16.142s 00:09:09.320 sys 0m2.308s 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 
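The teardown just traced is nvmftestfini (normally also registered on the EXIT trap as 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini'): unload the host-side NVMe/TCP modules, kill the target, strip only the iptables rules that ipts tagged with an SPDK_NVMF comment, then dismantle the veth/bridge/namespace plumbing. In outline (a sketch; the real steps live in nvmfcleanup, killprocess, iptr and nvmf_veth_fini):

    modprobe -v -r nvme-tcp nvme-fabrics                    # nvmfcleanup; retried while the modules are still busy
    kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK-tagged rules
    ip link delete nvmf_br type bridge                      # nvmf_veth_fini, after detaching the four *_br peers
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk                        # _remove_spdk_ns, simplified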
************************************ 00:09:09.320 END TEST nvmf_nmic 00:09:09.320 ************************************ 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 ************************************ 00:09:09.320 START TEST nvmf_fio_target 00:09:09.320 ************************************ 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:09.320 * Looking for test storage... 00:09:09.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.320 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.579 --rc genhtml_branch_coverage=1 00:09:09.579 --rc genhtml_function_coverage=1 00:09:09.579 --rc genhtml_legend=1 00:09:09.579 --rc geninfo_all_blocks=1 00:09:09.579 --rc geninfo_unexecuted_blocks=1 00:09:09.579 00:09:09.579 ' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.579 --rc genhtml_branch_coverage=1 00:09:09.579 --rc genhtml_function_coverage=1 00:09:09.579 --rc genhtml_legend=1 00:09:09.579 --rc geninfo_all_blocks=1 00:09:09.579 --rc geninfo_unexecuted_blocks=1 00:09:09.579 00:09:09.579 ' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.579 --rc genhtml_branch_coverage=1 00:09:09.579 --rc genhtml_function_coverage=1 00:09:09.579 --rc genhtml_legend=1 00:09:09.579 --rc geninfo_all_blocks=1 00:09:09.579 --rc geninfo_unexecuted_blocks=1 00:09:09.579 00:09:09.579 ' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.579 --rc genhtml_branch_coverage=1 00:09:09.579 --rc genhtml_function_coverage=1 00:09:09.579 --rc genhtml_legend=1 00:09:09.579 --rc geninfo_all_blocks=1 00:09:09.579 --rc geninfo_unexecuted_blocks=1 00:09:09.579 00:09:09.579 ' 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:09.579 
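nvmf/common.sh, sourced next, generates a host NQN once per run and derives a matching host ID from it; both are then passed on every `nvme connect` in this test (seen later against cnode1). A sketch of that flow using the values visible in this run; exactly how common.sh derives NVME_HOSTID from the NQN is an assumption here:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed derivation: 310b31eb-b117-4685-b95a-c58b48fd3835
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Every initiator-side connect then reuses the same identity:
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420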
20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.579 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.580 20:30:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.580 Cannot find device "nvmf_init_br" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.580 Cannot find device "nvmf_init_br2" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.580 Cannot find device "nvmf_tgt_br" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.580 Cannot find device "nvmf_tgt_br2" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.580 Cannot find device "nvmf_init_br" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.580 Cannot find device "nvmf_init_br2" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.580 Cannot find device "nvmf_tgt_br" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.580 Cannot find device "nvmf_tgt_br2" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.580 Cannot find device "nvmf_br" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.580 Cannot find device "nvmf_init_if" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.580 Cannot find device "nvmf_init_if2" 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:09.580 
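The same guarded cleanup has now run for the fio target test, and the nvmf_veth_init sequence that follows rebuilds the four-veth topology from scratch. For orientation, the address plan it uses (taken from the NVMF_* variables traced just above) and the two directions the later pings exercise:

    #  initiator (default netns)          target (netns nvmf_tgt_ns_spdk)
    #  nvmf_init_if   10.0.0.1/24         nvmf_tgt_if    10.0.0.3/24
    #  nvmf_init_if2  10.0.0.2/24         nvmf_tgt_if2   10.0.0.4/24
    #  each interface's *_br veth peer is enslaved to the nvmf_br bridge
    ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator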
20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.580 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.839 20:30:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:09.839 00:09:09.839 --- 10.0.0.3 ping statistics --- 00:09:09.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.839 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.839 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.839 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:09:09.839 00:09:09.839 --- 10.0.0.4 ping statistics --- 00:09:09.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.839 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:09.839 00:09:09.839 --- 10.0.0.1 ping statistics --- 00:09:09.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.839 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:09.839 00:09:09.839 --- 10.0.0.2 ping statistics --- 00:09:09.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.839 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.839 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66188 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66188 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66188 ']' 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.840 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.098 [2024-11-26 20:30:10.211842] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
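The fio target is configured next entirely over JSON-RPC, as traced below: seven malloc bdevs are created, two of them become a raid0 and three a concat raid, and cnode1 ends up exposing four namespaces (Malloc0, Malloc1, raid0, concat0) on one TCP listener, which is why waitforserial later expects 4 devices. Condensed to the bare rpc.py calls, with the same arguments as the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for _ in $(seq 1 7); do $rpc bdev_malloc_create 64 512; done          # yields Malloc0 .. Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420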
00:09:10.098 [2024-11-26 20:30:10.211924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.098 [2024-11-26 20:30:10.357766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.098 [2024-11-26 20:30:10.440904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.098 [2024-11-26 20:30:10.440998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.098 [2024-11-26 20:30:10.441033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.098 [2024-11-26 20:30:10.441048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.098 [2024-11-26 20:30:10.441060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.098 [2024-11-26 20:30:10.442414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.098 [2024-11-26 20:30:10.442513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.098 [2024-11-26 20:30:10.442613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.098 [2024-11-26 20:30:10.442622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.356 [2024-11-26 20:30:10.500734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.356 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:10.615 [2024-11-26 20:30:10.922848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.615 20:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.181 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:11.181 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.438 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:11.438 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.697 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:11.697 20:30:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.955 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:11.955 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:12.214 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.473 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:12.473 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.732 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:12.732 20:30:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:12.991 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:12.991 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:13.250 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.508 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:13.508 20:30:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.767 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:13.767 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:14.026 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:14.286 [2024-11-26 20:30:14.562758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:14.286 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:14.545 20:30:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:14.803 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:15.062 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:15.062 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:15.062 20:30:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.062 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:15.062 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:15.062 20:30:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:16.990 20:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:16.990 [global] 00:09:16.990 thread=1 00:09:16.990 invalidate=1 00:09:16.990 rw=write 00:09:16.990 time_based=1 00:09:16.990 runtime=1 00:09:16.990 ioengine=libaio 00:09:16.990 direct=1 00:09:16.990 bs=4096 00:09:16.990 iodepth=1 00:09:16.990 norandommap=0 00:09:16.990 numjobs=1 00:09:16.990 00:09:16.990 verify_dump=1 00:09:16.990 verify_backlog=512 00:09:16.990 verify_state_save=0 00:09:16.990 do_verify=1 00:09:16.990 verify=crc32c-intel 00:09:16.990 [job0] 00:09:16.990 filename=/dev/nvme0n1 00:09:16.990 [job1] 00:09:16.990 filename=/dev/nvme0n2 00:09:16.990 [job2] 00:09:16.990 filename=/dev/nvme0n3 00:09:16.990 [job3] 00:09:16.990 filename=/dev/nvme0n4 00:09:17.249 Could not set queue depth (nvme0n1) 00:09:17.249 Could not set queue depth (nvme0n2) 00:09:17.249 Could not set queue depth (nvme0n3) 00:09:17.249 Could not set queue depth (nvme0n4) 00:09:17.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.249 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.249 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.249 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.249 fio-3.35 00:09:17.249 Starting 4 threads 00:09:18.623 00:09:18.623 job0: (groupid=0, jobs=1): err= 0: pid=66371: Tue Nov 26 20:30:18 2024 00:09:18.623 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:18.623 slat (nsec): min=11520, max=43592, avg=13570.94, stdev=2844.09 00:09:18.623 clat (usec): min=133, max=2070, avg=163.97, stdev=45.25 00:09:18.623 lat (usec): min=146, max=2086, avg=177.54, stdev=45.43 00:09:18.623 clat percentiles (usec): 00:09:18.623 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:09:18.623 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:18.623 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 182], 00:09:18.623 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 437], 99.95th=[ 1565], 00:09:18.623 | 99.99th=[ 
2073] 00:09:18.623 write: IOPS=3134, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1001msec); 0 zone resets 00:09:18.623 slat (usec): min=14, max=104, avg=19.34, stdev= 3.40 00:09:18.623 clat (usec): min=93, max=260, avg=122.30, stdev=11.06 00:09:18.623 lat (usec): min=111, max=365, avg=141.64, stdev=11.83 00:09:18.623 clat percentiles (usec): 00:09:18.623 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 115], 00:09:18.623 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 124], 00:09:18.623 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 141], 00:09:18.623 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 202], 99.95th=[ 225], 00:09:18.623 | 99.99th=[ 262] 00:09:18.623 bw ( KiB/s): min=12630, max=12630, per=25.69%, avg=12630.00, stdev= 0.00, samples=1 00:09:18.623 iops : min= 3157, max= 3157, avg=3157.00, stdev= 0.00, samples=1 00:09:18.623 lat (usec) : 100=0.52%, 250=99.39%, 500=0.05%, 750=0.02% 00:09:18.623 lat (msec) : 2=0.02%, 4=0.02% 00:09:18.623 cpu : usr=1.80%, sys=8.60%, ctx=6211, majf=0, minf=11 00:09:18.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.623 issued rwts: total=3072,3138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.623 job1: (groupid=0, jobs=1): err= 0: pid=66372: Tue Nov 26 20:30:18 2024 00:09:18.623 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:09:18.623 slat (nsec): min=11248, max=50525, avg=13216.47, stdev=2302.94 00:09:18.624 clat (usec): min=132, max=618, avg=167.50, stdev=20.84 00:09:18.624 lat (usec): min=145, max=631, avg=180.72, stdev=20.90 00:09:18.624 clat percentiles (usec): 00:09:18.624 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:18.624 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:18.624 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 198], 00:09:18.624 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 293], 99.95th=[ 506], 00:09:18.624 | 99.99th=[ 619] 00:09:18.624 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:18.624 slat (usec): min=14, max=101, avg=19.18, stdev= 3.57 00:09:18.624 clat (usec): min=94, max=644, avg=124.26, stdev=14.67 00:09:18.624 lat (usec): min=111, max=662, avg=143.43, stdev=15.33 00:09:18.624 clat percentiles (usec): 00:09:18.624 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 116], 00:09:18.624 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:09:18.624 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:09:18.624 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 194], 99.95th=[ 277], 00:09:18.624 | 99.99th=[ 644] 00:09:18.624 bw ( KiB/s): min=12536, max=12536, per=25.50%, avg=12536.00, stdev= 0.00, samples=1 00:09:18.624 iops : min= 3134, max= 3134, avg=3134.00, stdev= 0.00, samples=1 00:09:18.624 lat (usec) : 100=0.13%, 250=99.39%, 500=0.43%, 750=0.05% 00:09:18.624 cpu : usr=2.10%, sys=8.00%, ctx=6114, majf=0, minf=9 00:09:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.624 issued rwts: total=3042,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.624 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:18.624 job2: (groupid=0, jobs=1): err= 0: pid=66373: Tue Nov 26 20:30:18 2024 00:09:18.624 read: IOPS=2607, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:09:18.624 slat (nsec): min=11710, max=43332, avg=14897.99, stdev=3281.89 00:09:18.624 clat (usec): min=147, max=418, avg=179.14, stdev=18.46 00:09:18.624 lat (usec): min=162, max=432, avg=194.03, stdev=18.81 00:09:18.624 clat percentiles (usec): 00:09:18.624 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:09:18.624 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:09:18.624 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:09:18.624 | 99.00th=[ 221], 99.50th=[ 277], 99.90th=[ 412], 99.95th=[ 412], 00:09:18.624 | 99.99th=[ 420] 00:09:18.624 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:18.624 slat (nsec): min=14935, max=87717, avg=22515.18, stdev=6049.00 00:09:18.624 clat (usec): min=105, max=328, avg=134.86, stdev=11.87 00:09:18.624 lat (usec): min=123, max=347, avg=157.37, stdev=13.75 00:09:18.624 clat percentiles (usec): 00:09:18.624 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:09:18.624 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:09:18.624 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:09:18.624 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 233], 00:09:18.624 | 99.99th=[ 330] 00:09:18.624 bw ( KiB/s): min=12288, max=12288, per=24.99%, avg=12288.00, stdev= 0.00, samples=1 00:09:18.624 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:18.624 lat (usec) : 250=99.68%, 500=0.32% 00:09:18.624 cpu : usr=1.90%, sys=9.00%, ctx=5682, majf=0, minf=7 00:09:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.624 issued rwts: total=2610,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.624 job3: (groupid=0, jobs=1): err= 0: pid=66374: Tue Nov 26 20:30:18 2024 00:09:18.624 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:18.624 slat (nsec): min=11239, max=58815, avg=16788.01, stdev=6698.69 00:09:18.624 clat (usec): min=145, max=1668, avg=180.25, stdev=32.48 00:09:18.624 lat (usec): min=157, max=1681, avg=197.04, stdev=33.70 00:09:18.624 clat percentiles (usec): 00:09:18.624 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:09:18.624 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:09:18.624 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:09:18.624 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 253], 99.95th=[ 260], 00:09:18.624 | 99.99th=[ 1663] 00:09:18.624 write: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:09:18.624 slat (nsec): min=13485, max=87240, avg=23175.64, stdev=7930.84 00:09:18.624 clat (usec): min=105, max=521, avg=137.44, stdev=18.33 00:09:18.624 lat (usec): min=123, max=542, avg=160.62, stdev=20.36 00:09:18.624 clat percentiles (usec): 00:09:18.624 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:09:18.624 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:18.624 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:09:18.624 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 293], 99.95th=[ 
523], 00:09:18.624 | 99.99th=[ 523] 00:09:18.624 bw ( KiB/s): min=12288, max=12288, per=24.99%, avg=12288.00, stdev= 0.00, samples=1 00:09:18.624 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:18.624 lat (usec) : 250=99.86%, 500=0.09%, 750=0.04% 00:09:18.624 lat (msec) : 2=0.02% 00:09:18.624 cpu : usr=2.60%, sys=8.90%, ctx=5587, majf=0, minf=9 00:09:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.624 issued rwts: total=2560,3021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.624 00:09:18.624 Run status group 0 (all jobs): 00:09:18.624 READ: bw=44.0MiB/s (46.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.1MiB (46.2MB), run=1001-1001msec 00:09:18.624 WRITE: bw=48.0MiB/s (50.3MB/s), 11.8MiB/s-12.2MiB/s (12.4MB/s-12.8MB/s), io=48.1MiB (50.4MB), run=1001-1001msec 00:09:18.624 00:09:18.624 Disk stats (read/write): 00:09:18.624 nvme0n1: ios=2610/2737, merge=0/0, ticks=450/351, in_queue=801, util=86.67% 00:09:18.624 nvme0n2: ios=2599/2649, merge=0/0, ticks=458/346, in_queue=804, util=88.11% 00:09:18.624 nvme0n3: ios=2265/2560, merge=0/0, ticks=406/376, in_queue=782, util=89.13% 00:09:18.624 nvme0n4: ios=2178/2560, merge=0/0, ticks=404/363, in_queue=767, util=89.70% 00:09:18.624 20:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:18.624 [global] 00:09:18.624 thread=1 00:09:18.624 invalidate=1 00:09:18.624 rw=randwrite 00:09:18.624 time_based=1 00:09:18.624 runtime=1 00:09:18.624 ioengine=libaio 00:09:18.624 direct=1 00:09:18.624 bs=4096 00:09:18.624 iodepth=1 00:09:18.624 norandommap=0 00:09:18.624 numjobs=1 00:09:18.624 00:09:18.624 verify_dump=1 00:09:18.624 verify_backlog=512 00:09:18.624 verify_state_save=0 00:09:18.624 do_verify=1 00:09:18.624 verify=crc32c-intel 00:09:18.624 [job0] 00:09:18.624 filename=/dev/nvme0n1 00:09:18.624 [job1] 00:09:18.624 filename=/dev/nvme0n2 00:09:18.624 [job2] 00:09:18.624 filename=/dev/nvme0n3 00:09:18.624 [job3] 00:09:18.624 filename=/dev/nvme0n4 00:09:18.624 Could not set queue depth (nvme0n1) 00:09:18.624 Could not set queue depth (nvme0n2) 00:09:18.624 Could not set queue depth (nvme0n3) 00:09:18.624 Could not set queue depth (nvme0n4) 00:09:18.624 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.624 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.624 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.624 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.624 fio-3.35 00:09:18.624 Starting 4 threads 00:09:20.064 00:09:20.064 job0: (groupid=0, jobs=1): err= 0: pid=66427: Tue Nov 26 20:30:20 2024 00:09:20.064 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:20.064 slat (nsec): min=8296, max=47795, avg=14607.03, stdev=5449.88 00:09:20.064 clat (usec): min=194, max=644, avg=240.42, stdev=21.01 00:09:20.064 lat (usec): min=217, max=656, avg=255.03, stdev=22.54 00:09:20.064 clat percentiles (usec): 00:09:20.064 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 223], 
20.00th=[ 229], 00:09:20.064 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 243], 00:09:20.064 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:09:20.064 | 99.00th=[ 289], 99.50th=[ 334], 99.90th=[ 494], 99.95th=[ 537], 00:09:20.064 | 99.99th=[ 644] 00:09:20.064 write: IOPS=2245, BW=8983KiB/s (9199kB/s)(8992KiB/1001msec); 0 zone resets 00:09:20.064 slat (usec): min=10, max=172, avg=19.43, stdev= 7.09 00:09:20.064 clat (usec): min=106, max=978, avg=189.68, stdev=31.23 00:09:20.064 lat (usec): min=139, max=1012, avg=209.11, stdev=32.44 00:09:20.064 clat percentiles (usec): 00:09:20.064 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:09:20.064 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:09:20.064 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:09:20.064 | 99.00th=[ 249], 99.50th=[ 310], 99.90th=[ 529], 99.95th=[ 758], 00:09:20.064 | 99.99th=[ 979] 00:09:20.064 bw ( KiB/s): min= 9176, max= 9176, per=21.58%, avg=9176.00, stdev= 0.00, samples=1 00:09:20.064 iops : min= 2294, max= 2294, avg=2294.00, stdev= 0.00, samples=1 00:09:20.064 lat (usec) : 250=89.90%, 500=9.96%, 750=0.09%, 1000=0.05% 00:09:20.064 cpu : usr=1.70%, sys=6.60%, ctx=4297, majf=0, minf=15 00:09:20.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.064 issued rwts: total=2048,2248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.064 job1: (groupid=0, jobs=1): err= 0: pid=66428: Tue Nov 26 20:30:20 2024 00:09:20.064 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:20.064 slat (nsec): min=8563, max=42549, avg=12401.42, stdev=4248.04 00:09:20.064 clat (usec): min=172, max=657, avg=242.66, stdev=22.14 00:09:20.064 lat (usec): min=187, max=667, avg=255.06, stdev=22.90 00:09:20.064 clat percentiles (usec): 00:09:20.064 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 229], 00:09:20.064 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:09:20.064 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:09:20.064 | 99.00th=[ 293], 99.50th=[ 334], 99.90th=[ 523], 99.95th=[ 545], 00:09:20.064 | 99.99th=[ 660] 00:09:20.064 write: IOPS=2247, BW=8991KiB/s (9207kB/s)(9000KiB/1001msec); 0 zone resets 00:09:20.064 slat (usec): min=10, max=105, avg=18.55, stdev= 6.33 00:09:20.064 clat (usec): min=105, max=998, avg=190.66, stdev=30.41 00:09:20.064 lat (usec): min=150, max=1026, avg=209.21, stdev=31.63 00:09:20.064 clat percentiles (usec): 00:09:20.064 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:09:20.064 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:09:20.064 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 219], 00:09:20.064 | 99.00th=[ 243], 99.50th=[ 314], 99.90th=[ 523], 99.95th=[ 693], 00:09:20.064 | 99.99th=[ 996] 00:09:20.064 bw ( KiB/s): min= 9192, max= 9192, per=21.62%, avg=9192.00, stdev= 0.00, samples=1 00:09:20.064 iops : min= 2298, max= 2298, avg=2298.00, stdev= 0.00, samples=1 00:09:20.064 lat (usec) : 250=87.20%, 500=12.66%, 750=0.12%, 1000=0.02% 00:09:20.064 cpu : usr=1.70%, sys=5.60%, ctx=4299, majf=0, minf=8 00:09:20.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.064 issued rwts: total=2048,2250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.064 job2: (groupid=0, jobs=1): err= 0: pid=66429: Tue Nov 26 20:30:20 2024 00:09:20.064 read: IOPS=2637, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:20.064 slat (nsec): min=10731, max=57336, avg=13750.67, stdev=4231.64 00:09:20.064 clat (usec): min=145, max=5475, avg=181.34, stdev=136.14 00:09:20.064 lat (usec): min=156, max=5491, avg=195.09, stdev=136.57 00:09:20.064 clat percentiles (usec): 00:09:20.064 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:09:20.064 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:20.064 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 206], 00:09:20.064 | 99.00th=[ 243], 99.50th=[ 281], 99.90th=[ 2114], 99.95th=[ 3654], 00:09:20.064 | 99.99th=[ 5473] 00:09:20.064 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.064 slat (nsec): min=13640, max=83323, avg=19948.36, stdev=5120.34 00:09:20.064 clat (usec): min=103, max=1047, avg=134.62, stdev=26.41 00:09:20.064 lat (usec): min=120, max=1077, avg=154.57, stdev=27.29 00:09:20.064 clat percentiles (usec): 00:09:20.064 | 1.00th=[ 110], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:09:20.064 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 137], 00:09:20.064 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:09:20.065 | 99.00th=[ 178], 99.50th=[ 192], 99.90th=[ 253], 99.95th=[ 938], 00:09:20.065 | 99.99th=[ 1045] 00:09:20.065 bw ( KiB/s): min=12312, max=12312, per=28.95%, avg=12312.00, stdev= 0.00, samples=1 00:09:20.065 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:09:20.065 lat (usec) : 250=99.56%, 500=0.26%, 750=0.04%, 1000=0.02% 00:09:20.065 lat (msec) : 2=0.07%, 4=0.04%, 10=0.02% 00:09:20.065 cpu : usr=1.60%, sys=8.50%, ctx=5717, majf=0, minf=11 00:09:20.065 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.065 issued rwts: total=2640,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.065 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.065 job3: (groupid=0, jobs=1): err= 0: pid=66430: Tue Nov 26 20:30:20 2024 00:09:20.065 read: IOPS=2747, BW=10.7MiB/s (11.3MB/s)(10.7MiB/1001msec) 00:09:20.065 slat (nsec): min=11408, max=52853, avg=14456.80, stdev=3823.16 00:09:20.065 clat (usec): min=144, max=532, avg=172.59, stdev=15.43 00:09:20.065 lat (usec): min=157, max=544, avg=187.05, stdev=16.35 00:09:20.065 clat percentiles (usec): 00:09:20.065 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:09:20.065 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:09:20.065 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:09:20.065 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 306], 99.95th=[ 437], 00:09:20.065 | 99.99th=[ 529] 00:09:20.065 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.065 slat (usec): min=14, max=114, avg=20.63, stdev= 5.10 00:09:20.065 clat (usec): min=102, max=232, avg=134.13, stdev=12.07 00:09:20.065 lat (usec): min=121, max=346, avg=154.76, stdev=13.19 00:09:20.065 clat percentiles (usec): 
00:09:20.065 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 125], 00:09:20.065 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:09:20.065 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:09:20.065 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 208], 99.95th=[ 229], 00:09:20.065 | 99.99th=[ 233] 00:09:20.065 bw ( KiB/s): min=12288, max=12288, per=28.90%, avg=12288.00, stdev= 0.00, samples=1 00:09:20.065 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:20.065 lat (usec) : 250=99.91%, 500=0.07%, 750=0.02% 00:09:20.065 cpu : usr=2.90%, sys=7.60%, ctx=5822, majf=0, minf=15 00:09:20.065 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.065 issued rwts: total=2750,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.065 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.065 00:09:20.065 Run status group 0 (all jobs): 00:09:20.065 READ: bw=37.0MiB/s (38.8MB/s), 8184KiB/s-10.7MiB/s (8380kB/s-11.3MB/s), io=37.1MiB (38.9MB), run=1001-1001msec 00:09:20.065 WRITE: bw=41.5MiB/s (43.5MB/s), 8983KiB/s-12.0MiB/s (9199kB/s-12.6MB/s), io=41.6MiB (43.6MB), run=1001-1001msec 00:09:20.065 00:09:20.065 Disk stats (read/write): 00:09:20.065 nvme0n1: ios=1730/2048, merge=0/0, ticks=418/364, in_queue=782, util=88.58% 00:09:20.065 nvme0n2: ios=1729/2048, merge=0/0, ticks=409/371, in_queue=780, util=88.47% 00:09:20.065 nvme0n3: ios=2355/2560, merge=0/0, ticks=426/359, in_queue=785, util=88.67% 00:09:20.065 nvme0n4: ios=2461/2560, merge=0/0, ticks=437/354, in_queue=791, util=89.83% 00:09:20.065 20:30:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:20.065 [global] 00:09:20.065 thread=1 00:09:20.065 invalidate=1 00:09:20.065 rw=write 00:09:20.065 time_based=1 00:09:20.065 runtime=1 00:09:20.065 ioengine=libaio 00:09:20.065 direct=1 00:09:20.065 bs=4096 00:09:20.065 iodepth=128 00:09:20.065 norandommap=0 00:09:20.065 numjobs=1 00:09:20.065 00:09:20.065 verify_dump=1 00:09:20.065 verify_backlog=512 00:09:20.065 verify_state_save=0 00:09:20.065 do_verify=1 00:09:20.065 verify=crc32c-intel 00:09:20.065 [job0] 00:09:20.065 filename=/dev/nvme0n1 00:09:20.065 [job1] 00:09:20.065 filename=/dev/nvme0n2 00:09:20.065 [job2] 00:09:20.065 filename=/dev/nvme0n3 00:09:20.065 [job3] 00:09:20.065 filename=/dev/nvme0n4 00:09:20.065 Could not set queue depth (nvme0n1) 00:09:20.065 Could not set queue depth (nvme0n2) 00:09:20.065 Could not set queue depth (nvme0n3) 00:09:20.065 Could not set queue depth (nvme0n4) 00:09:20.065 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.065 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.065 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.065 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.065 fio-3.35 00:09:20.065 Starting 4 threads 00:09:21.442 00:09:21.442 job0: (groupid=0, jobs=1): err= 0: pid=66491: Tue Nov 26 20:30:21 2024 00:09:21.442 read: IOPS=4052, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1003msec) 00:09:21.442 slat (usec): min=4, max=8008, 
avg=129.02, stdev=658.33 00:09:21.442 clat (usec): min=1446, max=33970, avg=16566.97, stdev=4334.03 00:09:21.442 lat (usec): min=4468, max=33989, avg=16695.99, stdev=4322.43 00:09:21.442 clat percentiles (usec): 00:09:21.442 | 1.00th=[ 8455], 5.00th=[12256], 10.00th=[13173], 20.00th=[14091], 00:09:21.442 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[15008], 00:09:21.442 | 70.00th=[17695], 80.00th=[21103], 90.00th=[21890], 95.00th=[22414], 00:09:21.442 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:21.442 | 99.99th=[33817] 00:09:21.442 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:21.442 slat (usec): min=11, max=5979, avg=107.21, stdev=480.09 00:09:21.442 clat (usec): min=9106, max=29251, avg=14404.34, stdev=3603.97 00:09:21.442 lat (usec): min=10834, max=29296, avg=14511.55, stdev=3588.59 00:09:21.442 clat percentiles (usec): 00:09:21.442 | 1.00th=[ 9634], 5.00th=[11207], 10.00th=[11207], 20.00th=[11338], 00:09:21.442 | 30.00th=[11600], 40.00th=[11994], 50.00th=[13435], 60.00th=[15139], 00:09:21.442 | 70.00th=[15664], 80.00th=[16057], 90.00th=[19792], 95.00th=[21365], 00:09:21.442 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:09:21.442 | 99.99th=[29230] 00:09:21.442 bw ( KiB/s): min=16384, max=16416, per=31.15%, avg=16400.00, stdev=22.63, samples=2 00:09:21.442 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:09:21.442 lat (msec) : 2=0.01%, 10=1.48%, 20=82.09%, 50=16.42% 00:09:21.442 cpu : usr=4.09%, sys=12.97%, ctx=259, majf=0, minf=9 00:09:21.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:21.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.442 issued rwts: total=4065,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.442 job1: (groupid=0, jobs=1): err= 0: pid=66492: Tue Nov 26 20:30:21 2024 00:09:21.442 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:09:21.442 slat (usec): min=6, max=7421, avg=155.23, stdev=640.65 00:09:21.442 clat (usec): min=12725, max=35986, avg=19701.66, stdev=3816.78 00:09:21.442 lat (usec): min=14115, max=38698, avg=19856.89, stdev=3878.67 00:09:21.442 clat percentiles (usec): 00:09:21.442 | 1.00th=[14615], 5.00th=[15795], 10.00th=[15926], 20.00th=[16319], 00:09:21.442 | 30.00th=[16581], 40.00th=[17171], 50.00th=[19268], 60.00th=[20317], 00:09:21.442 | 70.00th=[21890], 80.00th=[23200], 90.00th=[23725], 95.00th=[25822], 00:09:21.442 | 99.00th=[33162], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:09:21.442 | 99.99th=[35914] 00:09:21.442 write: IOPS=2926, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1004msec); 0 zone resets 00:09:21.442 slat (usec): min=9, max=5667, avg=195.98, stdev=644.47 00:09:21.442 clat (usec): min=3267, max=46184, avg=25985.15, stdev=9826.93 00:09:21.442 lat (usec): min=6010, max=46210, avg=26181.13, stdev=9882.89 00:09:21.442 clat percentiles (usec): 00:09:21.442 | 1.00th=[10683], 5.00th=[11731], 10.00th=[13829], 20.00th=[15533], 00:09:21.442 | 30.00th=[16581], 40.00th=[23200], 50.00th=[28181], 60.00th=[28705], 00:09:21.442 | 70.00th=[31065], 80.00th=[35390], 90.00th=[39584], 95.00th=[41681], 00:09:21.442 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:09:21.442 | 99.99th=[46400] 00:09:21.442 bw ( KiB/s): min=10208, max=12280, per=21.36%, avg=11244.00, 
stdev=1465.13, samples=2 00:09:21.442 iops : min= 2552, max= 3070, avg=2811.00, stdev=366.28, samples=2 00:09:21.442 lat (msec) : 4=0.02%, 10=0.44%, 20=45.23%, 50=54.31% 00:09:21.442 cpu : usr=3.09%, sys=9.87%, ctx=410, majf=0, minf=10 00:09:21.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:21.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.442 issued rwts: total=2560,2938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.442 job2: (groupid=0, jobs=1): err= 0: pid=66493: Tue Nov 26 20:30:21 2024 00:09:21.442 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:09:21.442 slat (usec): min=5, max=8353, avg=168.56, stdev=732.73 00:09:21.442 clat (usec): min=14442, max=40012, avg=21623.14, stdev=3861.66 00:09:21.442 lat (usec): min=14462, max=40024, avg=21791.70, stdev=3932.59 00:09:21.442 clat percentiles (usec): 00:09:21.442 | 1.00th=[15664], 5.00th=[17957], 10.00th=[18482], 20.00th=[18744], 00:09:21.442 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[21103], 00:09:21.442 | 70.00th=[23462], 80.00th=[25822], 90.00th=[27132], 95.00th=[27657], 00:09:21.442 | 99.00th=[34341], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:09:21.442 | 99.99th=[40109] 00:09:21.442 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1004msec); 0 zone resets 00:09:21.442 slat (usec): min=11, max=7179, avg=178.44, stdev=724.67 00:09:21.442 clat (usec): min=3427, max=53484, avg=23416.07, stdev=10855.01 00:09:21.442 lat (usec): min=6597, max=53515, avg=23594.51, stdev=10928.41 00:09:21.442 clat percentiles (usec): 00:09:21.442 | 1.00th=[11863], 5.00th=[13173], 10.00th=[13566], 20.00th=[15533], 00:09:21.442 | 30.00th=[15926], 40.00th=[18482], 50.00th=[18744], 60.00th=[19268], 00:09:21.442 | 70.00th=[26346], 80.00th=[32900], 90.00th=[41157], 95.00th=[47449], 00:09:21.442 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:09:21.442 | 99.99th=[53740] 00:09:21.442 bw ( KiB/s): min=11032, max=12312, per=22.17%, avg=11672.00, stdev=905.10, samples=2 00:09:21.442 iops : min= 2758, max= 3078, avg=2918.00, stdev=226.27, samples=2 00:09:21.442 lat (msec) : 4=0.02%, 10=0.14%, 20=59.75%, 50=38.36%, 100=1.73% 00:09:21.442 cpu : usr=2.09%, sys=10.27%, ctx=289, majf=0, minf=13 00:09:21.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:21.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.442 issued rwts: total=2560,3042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.443 job3: (groupid=0, jobs=1): err= 0: pid=66494: Tue Nov 26 20:30:21 2024 00:09:21.443 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:09:21.443 slat (usec): min=3, max=9387, avg=177.13, stdev=938.57 00:09:21.443 clat (usec): min=11451, max=38972, avg=22370.68, stdev=6811.26 00:09:21.443 lat (usec): min=11506, max=38989, avg=22547.81, stdev=6807.44 00:09:21.443 clat percentiles (usec): 00:09:21.443 | 1.00th=[12125], 5.00th=[14615], 10.00th=[15926], 20.00th=[16450], 00:09:21.443 | 30.00th=[16712], 40.00th=[17171], 50.00th=[21627], 60.00th=[24773], 00:09:21.443 | 70.00th=[25297], 80.00th=[25822], 90.00th=[34341], 95.00th=[38011], 00:09:21.443 | 99.00th=[39060], 
99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:09:21.443 | 99.99th=[39060] 00:09:21.443 write: IOPS=3130, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1002msec); 0 zone resets 00:09:21.443 slat (usec): min=12, max=5848, avg=135.91, stdev=642.31 00:09:21.443 clat (usec): min=872, max=31796, avg=18235.50, stdev=5177.35 00:09:21.443 lat (usec): min=3221, max=31851, avg=18371.41, stdev=5156.08 00:09:21.443 clat percentiles (usec): 00:09:21.443 | 1.00th=[ 4293], 5.00th=[12780], 10.00th=[12911], 20.00th=[13435], 00:09:21.443 | 30.00th=[14222], 40.00th=[16909], 50.00th=[17695], 60.00th=[18220], 00:09:21.443 | 70.00th=[20317], 80.00th=[24773], 90.00th=[25297], 95.00th=[26084], 00:09:21.443 | 99.00th=[29230], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:09:21.443 | 99.99th=[31851] 00:09:21.443 bw ( KiB/s): min=12288, max=12312, per=23.37%, avg=12300.00, stdev=16.97, samples=2 00:09:21.443 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:21.443 lat (usec) : 1000=0.02% 00:09:21.443 lat (msec) : 4=0.34%, 10=0.71%, 20=56.87%, 50=42.07% 00:09:21.443 cpu : usr=3.40%, sys=10.29%, ctx=197, majf=0, minf=7 00:09:21.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:21.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.443 issued rwts: total=3072,3137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.443 00:09:21.443 Run status group 0 (all jobs): 00:09:21.443 READ: bw=47.7MiB/s (50.0MB/s), 9.96MiB/s-15.8MiB/s (10.4MB/s-16.6MB/s), io=47.9MiB (50.2MB), run=1002-1004msec 00:09:21.443 WRITE: bw=51.4MiB/s (53.9MB/s), 11.4MiB/s-16.0MiB/s (12.0MB/s-16.7MB/s), io=51.6MiB (54.1MB), run=1002-1004msec 00:09:21.443 00:09:21.443 Disk stats (read/write): 00:09:21.443 nvme0n1: ios=3474/3584, merge=0/0, ticks=13836/11023, in_queue=24859, util=89.18% 00:09:21.443 nvme0n2: ios=2334/2560, merge=0/0, ticks=14845/20244, in_queue=35089, util=89.61% 00:09:21.443 nvme0n3: ios=2560/2591, merge=0/0, ticks=18075/15989, in_queue=34064, util=89.46% 00:09:21.443 nvme0n4: ios=2560/2560, merge=0/0, ticks=14856/10622, in_queue=25478, util=90.75% 00:09:21.443 20:30:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:21.443 [global] 00:09:21.443 thread=1 00:09:21.443 invalidate=1 00:09:21.443 rw=randwrite 00:09:21.443 time_based=1 00:09:21.443 runtime=1 00:09:21.443 ioengine=libaio 00:09:21.443 direct=1 00:09:21.443 bs=4096 00:09:21.443 iodepth=128 00:09:21.443 norandommap=0 00:09:21.443 numjobs=1 00:09:21.443 00:09:21.443 verify_dump=1 00:09:21.443 verify_backlog=512 00:09:21.443 verify_state_save=0 00:09:21.443 do_verify=1 00:09:21.443 verify=crc32c-intel 00:09:21.443 [job0] 00:09:21.443 filename=/dev/nvme0n1 00:09:21.443 [job1] 00:09:21.443 filename=/dev/nvme0n2 00:09:21.443 [job2] 00:09:21.443 filename=/dev/nvme0n3 00:09:21.443 [job3] 00:09:21.443 filename=/dev/nvme0n4 00:09:21.443 Could not set queue depth (nvme0n1) 00:09:21.443 Could not set queue depth (nvme0n2) 00:09:21.443 Could not set queue depth (nvme0n3) 00:09:21.443 Could not set queue depth (nvme0n4) 00:09:21.443 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.443 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:21.443 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.443 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.443 fio-3.35 00:09:21.443 Starting 4 threads 00:09:22.819 00:09:22.819 job0: (groupid=0, jobs=1): err= 0: pid=66547: Tue Nov 26 20:30:22 2024 00:09:22.819 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:22.819 slat (usec): min=8, max=17524, avg=181.71, stdev=1269.91 00:09:22.819 clat (usec): min=13475, max=43548, avg=25695.64, stdev=4931.58 00:09:22.819 lat (usec): min=13491, max=52548, avg=25877.35, stdev=4987.87 00:09:22.819 clat percentiles (usec): 00:09:22.819 | 1.00th=[15008], 5.00th=[18744], 10.00th=[19792], 20.00th=[20579], 00:09:22.819 | 30.00th=[21103], 40.00th=[25035], 50.00th=[27395], 60.00th=[28181], 00:09:22.820 | 70.00th=[28705], 80.00th=[28967], 90.00th=[31851], 95.00th=[32375], 00:09:22.820 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43779], 99.95th=[43779], 00:09:22.820 | 99.99th=[43779] 00:09:22.820 write: IOPS=2798, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1006msec); 0 zone resets 00:09:22.820 slat (usec): min=7, max=25052, avg=182.19, stdev=1272.98 00:09:22.820 clat (usec): min=1242, max=40707, avg=21926.97, stdev=6397.56 00:09:22.820 lat (usec): min=8867, max=40733, avg=22109.16, stdev=6334.10 00:09:22.820 clat percentiles (usec): 00:09:22.820 | 1.00th=[ 9372], 5.00th=[13304], 10.00th=[14877], 20.00th=[15795], 00:09:22.820 | 30.00th=[16712], 40.00th=[18744], 50.00th=[23462], 60.00th=[24511], 00:09:22.820 | 70.00th=[25822], 80.00th=[27657], 90.00th=[28705], 95.00th=[30540], 00:09:22.820 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:09:22.820 | 99.99th=[40633] 00:09:22.820 bw ( KiB/s): min= 9720, max=11776, per=22.54%, avg=10748.00, stdev=1453.81, samples=2 00:09:22.820 iops : min= 2430, max= 2944, avg=2687.00, stdev=363.45, samples=2 00:09:22.820 lat (msec) : 2=0.02%, 10=1.02%, 20=28.43%, 50=70.53% 00:09:22.820 cpu : usr=2.79%, sys=8.96%, ctx=116, majf=0, minf=9 00:09:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.820 issued rwts: total=2560,2815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.820 job1: (groupid=0, jobs=1): err= 0: pid=66548: Tue Nov 26 20:30:22 2024 00:09:22.820 read: IOPS=1653, BW=6612KiB/s (6771kB/s)(6672KiB/1009msec) 00:09:22.820 slat (usec): min=4, max=35488, avg=241.44, stdev=1615.21 00:09:22.820 clat (usec): min=4156, max=92314, avg=31597.73, stdev=12623.16 00:09:22.820 lat (usec): min=12437, max=93470, avg=31839.17, stdev=12633.94 00:09:22.820 clat percentiles (usec): 00:09:22.820 | 1.00th=[16319], 5.00th=[21890], 10.00th=[25560], 20.00th=[27132], 00:09:22.820 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705], 00:09:22.820 | 70.00th=[28967], 80.00th=[31065], 90.00th=[43779], 95.00th=[66323], 00:09:22.820 | 99.00th=[89654], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:09:22.820 | 99.99th=[92799] 00:09:22.820 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:09:22.820 slat (usec): min=6, max=24680, avg=284.92, stdev=1582.55 00:09:22.820 clat (msec): min=11, max=106, avg=36.68, stdev=20.35 00:09:22.820 
lat (msec): min=11, max=106, avg=36.96, stdev=20.43 00:09:22.820 clat percentiles (msec): 00:09:22.820 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 25], 00:09:22.820 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 29], 60.00th=[ 29], 00:09:22.820 | 70.00th=[ 31], 80.00th=[ 52], 90.00th=[ 70], 95.00th=[ 84], 00:09:22.820 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:09:22.820 | 99.99th=[ 107] 00:09:22.820 bw ( KiB/s): min= 6992, max= 9373, per=17.16%, avg=8182.50, stdev=1683.62, samples=2 00:09:22.820 iops : min= 1748, max= 2343, avg=2045.50, stdev=420.73, samples=2 00:09:22.820 lat (msec) : 10=0.03%, 20=3.53%, 50=82.02%, 100=13.48%, 250=0.94% 00:09:22.820 cpu : usr=1.79%, sys=6.15%, ctx=152, majf=0, minf=11 00:09:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.820 issued rwts: total=1668,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.820 job2: (groupid=0, jobs=1): err= 0: pid=66549: Tue Nov 26 20:30:22 2024 00:09:22.820 read: IOPS=2604, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1007msec) 00:09:22.820 slat (usec): min=11, max=14302, avg=171.63, stdev=1155.87 00:09:22.820 clat (usec): min=1058, max=40583, avg=23272.12, stdev=3981.60 00:09:22.820 lat (usec): min=7052, max=45683, avg=23443.76, stdev=4014.10 00:09:22.820 clat percentiles (usec): 00:09:22.820 | 1.00th=[ 8029], 5.00th=[15401], 10.00th=[20579], 20.00th=[21365], 00:09:22.820 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23725], 00:09:22.820 | 70.00th=[24773], 80.00th=[25560], 90.00th=[27132], 95.00th=[28705], 00:09:22.820 | 99.00th=[34866], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:09:22.820 | 99.99th=[40633] 00:09:22.820 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:09:22.820 slat (usec): min=8, max=19712, avg=171.42, stdev=1143.49 00:09:22.820 clat (usec): min=9420, max=33882, avg=21654.54, stdev=3375.44 00:09:22.820 lat (usec): min=9501, max=33904, avg=21825.96, stdev=3233.68 00:09:22.820 clat percentiles (usec): 00:09:22.820 | 1.00th=[13173], 5.00th=[17695], 10.00th=[17957], 20.00th=[19530], 00:09:22.820 | 30.00th=[20055], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:09:22.820 | 70.00th=[22676], 80.00th=[23200], 90.00th=[25560], 95.00th=[28181], 00:09:22.820 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:22.820 | 99.99th=[33817] 00:09:22.820 bw ( KiB/s): min=11768, max=12288, per=25.22%, avg=12028.00, stdev=367.70, samples=2 00:09:22.820 iops : min= 2942, max= 3072, avg=3007.00, stdev=91.92, samples=2 00:09:22.820 lat (msec) : 2=0.02%, 10=1.12%, 20=17.31%, 50=81.55% 00:09:22.820 cpu : usr=2.88%, sys=9.74%, ctx=134, majf=0, minf=9 00:09:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.820 issued rwts: total=2623,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.820 job3: (groupid=0, jobs=1): err= 0: pid=66550: Tue Nov 26 20:30:22 2024 00:09:22.820 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:22.820 slat (usec): min=10, max=9452, 
avg=124.00, stdev=793.07 00:09:22.820 clat (usec): min=9497, max=28441, avg=17317.26, stdev=2153.45 00:09:22.820 lat (usec): min=9512, max=34115, avg=17441.26, stdev=2189.17 00:09:22.820 clat percentiles (usec): 00:09:22.820 | 1.00th=[10683], 5.00th=[15139], 10.00th=[15401], 20.00th=[16057], 00:09:22.820 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:09:22.820 | 70.00th=[17695], 80.00th=[18482], 90.00th=[19530], 95.00th=[20317], 00:09:22.820 | 99.00th=[26346], 99.50th=[27132], 99.90th=[28443], 99.95th=[28443], 00:09:22.820 | 99.99th=[28443] 00:09:22.820 write: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:22.820 slat (usec): min=6, max=12973, avg=126.79, stdev=784.14 00:09:22.820 clat (usec): min=664, max=22933, avg=15823.74, stdev=2221.63 00:09:22.820 lat (usec): min=4943, max=23105, avg=15950.53, stdev=2114.75 00:09:22.820 clat percentiles (usec): 00:09:22.820 | 1.00th=[ 6652], 5.00th=[13304], 10.00th=[14091], 20.00th=[14615], 00:09:22.820 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188], 00:09:22.820 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17957], 95.00th=[18220], 00:09:22.820 | 99.00th=[22676], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:09:22.820 | 99.99th=[22938] 00:09:22.820 bw ( KiB/s): min=16384, max=16384, per=34.35%, avg=16384.00, stdev= 0.00, samples=1 00:09:22.820 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:22.820 lat (usec) : 750=0.01% 00:09:22.820 lat (msec) : 10=1.48%, 20=94.02%, 50=4.48% 00:09:22.820 cpu : usr=3.70%, sys=13.00%, ctx=162, majf=0, minf=12 00:09:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.820 issued rwts: total=3584,4095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.820 00:09:22.820 Run status group 0 (all jobs): 00:09:22.820 READ: bw=40.4MiB/s (42.4MB/s), 6612KiB/s-14.0MiB/s (6771kB/s-14.7MB/s), io=40.8MiB (42.7MB), run=1001-1009msec 00:09:22.820 WRITE: bw=46.6MiB/s (48.8MB/s), 8119KiB/s-16.0MiB/s (8314kB/s-16.8MB/s), io=47.0MiB (49.3MB), run=1001-1009msec 00:09:22.820 00:09:22.820 Disk stats (read/write): 00:09:22.820 nvme0n1: ios=2098/2368, merge=0/0, ticks=51871/51771, in_queue=103642, util=88.78% 00:09:22.820 nvme0n2: ios=1585/1847, merge=0/0, ticks=42341/57118, in_queue=99459, util=88.90% 00:09:22.820 nvme0n3: ios=2184/2560, merge=0/0, ticks=49949/52475, in_queue=102424, util=89.20% 00:09:22.820 nvme0n4: ios=3072/3520, merge=0/0, ticks=49770/51921, in_queue=101691, util=89.75% 00:09:22.820 20:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:22.820 20:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66567 00:09:22.820 20:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:22.820 20:30:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:22.820 [global] 00:09:22.820 thread=1 00:09:22.820 invalidate=1 00:09:22.820 rw=read 00:09:22.820 time_based=1 00:09:22.820 runtime=10 00:09:22.820 ioengine=libaio 00:09:22.820 direct=1 00:09:22.820 bs=4096 00:09:22.820 iodepth=1 00:09:22.820 norandommap=1 00:09:22.820 numjobs=1 00:09:22.820 00:09:22.820 [job0] 
00:09:22.820 filename=/dev/nvme0n1 00:09:22.820 [job1] 00:09:22.820 filename=/dev/nvme0n2 00:09:22.820 [job2] 00:09:22.820 filename=/dev/nvme0n3 00:09:22.820 [job3] 00:09:22.820 filename=/dev/nvme0n4 00:09:22.820 Could not set queue depth (nvme0n1) 00:09:22.820 Could not set queue depth (nvme0n2) 00:09:22.820 Could not set queue depth (nvme0n3) 00:09:22.820 Could not set queue depth (nvme0n4) 00:09:22.820 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.820 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.821 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.821 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.821 fio-3.35 00:09:22.821 Starting 4 threads 00:09:26.102 20:30:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:26.102 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41308160, buflen=4096 00:09:26.102 fio: pid=66611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:26.102 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:26.102 fio: pid=66610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:26.102 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46899200, buflen=4096 00:09:26.102 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.102 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:26.360 fio: pid=66608, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:26.360 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50458624, buflen=4096 00:09:26.617 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.617 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:26.617 fio: pid=66609, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:26.617 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56819712, buflen=4096 00:09:26.875 00:09:26.875 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66608: Tue Nov 26 20:30:26 2024 00:09:26.875 read: IOPS=3530, BW=13.8MiB/s (14.5MB/s)(48.1MiB/3490msec) 00:09:26.875 slat (usec): min=7, max=11420, avg=15.11, stdev=152.67 00:09:26.875 clat (usec): min=131, max=97323, avg=266.86, stdev=876.58 00:09:26.875 lat (usec): min=144, max=97333, avg=281.96, stdev=889.62 00:09:26.875 clat percentiles (usec): 00:09:26.875 | 1.00th=[ 143], 5.00th=[ 155], 10.00th=[ 198], 20.00th=[ 237], 00:09:26.875 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:09:26.875 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318], 00:09:26.875 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 758], 99.95th=[ 1303], 00:09:26.875 | 99.99th=[ 2868] 
00:09:26.875 bw ( KiB/s): min=12822, max=14856, per=27.65%, avg=13986.33, stdev=690.12, samples=6 00:09:26.875 iops : min= 3205, max= 3714, avg=3496.50, stdev=172.70, samples=6 00:09:26.875 lat (usec) : 250=31.53%, 500=68.29%, 750=0.06%, 1000=0.04% 00:09:26.875 lat (msec) : 2=0.04%, 4=0.02%, 100=0.01% 00:09:26.875 cpu : usr=0.97%, sys=3.90%, ctx=12329, majf=0, minf=1 00:09:26.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.875 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.875 issued rwts: total=12320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.875 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66609: Tue Nov 26 20:30:26 2024 00:09:26.875 read: IOPS=3675, BW=14.4MiB/s (15.1MB/s)(54.2MiB/3774msec) 00:09:26.875 slat (usec): min=8, max=14847, avg=18.95, stdev=213.78 00:09:26.875 clat (usec): min=127, max=3345, avg=251.62, stdev=75.61 00:09:26.875 lat (usec): min=138, max=15175, avg=270.57, stdev=227.39 00:09:26.875 clat percentiles (usec): 00:09:26.875 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 225], 00:09:26.875 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:09:26.875 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 326], 00:09:26.875 | 99.00th=[ 416], 99.50th=[ 478], 99.90th=[ 881], 99.95th=[ 1467], 00:09:26.875 | 99.99th=[ 3064] 00:09:26.875 bw ( KiB/s): min=12816, max=16398, per=28.13%, avg=14228.29, stdev=1187.05, samples=7 00:09:26.875 iops : min= 3204, max= 4099, avg=3557.00, stdev=296.61, samples=7 00:09:26.875 lat (usec) : 250=41.84%, 500=57.70%, 750=0.32%, 1000=0.04% 00:09:26.875 lat (msec) : 2=0.06%, 4=0.02% 00:09:26.875 cpu : usr=1.11%, sys=4.66%, ctx=13891, majf=0, minf=2 00:09:26.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.875 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.875 issued rwts: total=13873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.875 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66610: Tue Nov 26 20:30:26 2024 00:09:26.875 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(44.7MiB/3230msec) 00:09:26.875 slat (usec): min=10, max=8832, avg=17.77, stdev=103.05 00:09:26.875 clat (usec): min=142, max=3364, avg=262.62, stdev=63.64 00:09:26.875 lat (usec): min=155, max=9083, avg=280.39, stdev=121.83 00:09:26.875 clat percentiles (usec): 00:09:26.875 | 1.00th=[ 169], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:09:26.875 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:09:26.875 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:09:26.875 | 99.00th=[ 371], 99.50th=[ 478], 99.90th=[ 840], 99.95th=[ 1369], 00:09:26.875 | 99.99th=[ 3195] 00:09:26.875 bw ( KiB/s): min=12926, max=14880, per=28.07%, avg=14201.00, stdev=750.49, samples=6 00:09:26.875 iops : min= 3231, max= 3720, avg=3550.17, stdev=187.79, samples=6 00:09:26.875 lat (usec) : 250=36.01%, 500=63.53%, 750=0.35%, 1000=0.04% 00:09:26.875 lat (msec) : 2=0.03%, 4=0.03% 00:09:26.875 cpu : usr=1.30%, sys=4.96%, ctx=11458, majf=0, minf=1 00:09:26.875 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.875 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.875 issued rwts: total=11451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.875 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66611: Tue Nov 26 20:30:26 2024 00:09:26.875 read: IOPS=3386, BW=13.2MiB/s (13.9MB/s)(39.4MiB/2978msec) 00:09:26.875 slat (usec): min=11, max=212, avg=15.26, stdev= 4.89 00:09:26.875 clat (usec): min=165, max=6475, avg=278.31, stdev=140.26 00:09:26.875 lat (usec): min=182, max=6487, avg=293.57, stdev=140.92 00:09:26.875 clat percentiles (usec): 00:09:26.875 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 251], 00:09:26.875 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:09:26.875 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:09:26.875 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 2311], 99.95th=[ 3621], 00:09:26.875 | 99.99th=[ 5145] 00:09:26.875 bw ( KiB/s): min=12686, max=14768, per=26.88%, avg=13599.60, stdev=757.12, samples=5 00:09:26.875 iops : min= 3171, max= 3692, avg=3399.80, stdev=189.43, samples=5 00:09:26.875 lat (usec) : 250=18.96%, 500=80.77%, 750=0.04%, 1000=0.06% 00:09:26.875 lat (msec) : 2=0.03%, 4=0.09%, 10=0.05% 00:09:26.875 cpu : usr=0.97%, sys=4.43%, ctx=10089, majf=0, minf=1 00:09:26.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.876 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.876 issued rwts: total=10086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.876 00:09:26.876 Run status group 0 (all jobs): 00:09:26.876 READ: bw=49.4MiB/s (51.8MB/s), 13.2MiB/s-14.4MiB/s (13.9MB/s-15.1MB/s), io=186MiB (195MB), run=2978-3774msec 00:09:26.876 00:09:26.876 Disk stats (read/write): 00:09:26.876 nvme0n1: ios=11781/0, merge=0/0, ticks=3190/0, in_queue=3190, util=95.54% 00:09:26.876 nvme0n2: ios=12909/0, merge=0/0, ticks=3394/0, in_queue=3394, util=95.32% 00:09:26.876 nvme0n3: ios=11012/0, merge=0/0, ticks=2942/0, in_queue=2942, util=96.33% 00:09:26.876 nvme0n4: ios=9769/0, merge=0/0, ticks=2733/0, in_queue=2733, util=96.59% 00:09:26.876 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.876 20:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:27.133 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.133 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:27.391 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.391 20:30:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:27.958 20:30:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.958 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:28.215 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.215 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:28.473 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:28.473 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66567 00:09:28.473 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:28.473 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:28.731 nvmf hotplug test: fio failed as expected 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:28.731 20:30:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.989 rmmod nvme_tcp 00:09:28.989 rmmod nvme_fabrics 00:09:28.989 rmmod nvme_keyring 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66188 ']' 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66188 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66188 ']' 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66188 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.989 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66188 00:09:29.247 killing process with pid 66188 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66188' 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66188 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66188 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:29.247 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.248 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.248 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:29.248 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:29.248 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:29.506 00:09:29.506 real 0m20.272s 00:09:29.506 user 1m16.483s 00:09:29.506 sys 0m10.141s 00:09:29.506 ************************************ 00:09:29.506 END TEST nvmf_fio_target 00:09:29.506 ************************************ 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.506 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 ************************************ 00:09:29.766 START TEST nvmf_bdevio 00:09:29.766 ************************************ 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:29.766 * Looking for test storage... 
00:09:29.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.766 20:30:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.766 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.767 --rc genhtml_branch_coverage=1 00:09:29.767 --rc genhtml_function_coverage=1 00:09:29.767 --rc genhtml_legend=1 00:09:29.767 --rc geninfo_all_blocks=1 00:09:29.767 --rc geninfo_unexecuted_blocks=1 00:09:29.767 00:09:29.767 ' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.767 --rc genhtml_branch_coverage=1 00:09:29.767 --rc genhtml_function_coverage=1 00:09:29.767 --rc genhtml_legend=1 00:09:29.767 --rc geninfo_all_blocks=1 00:09:29.767 --rc geninfo_unexecuted_blocks=1 00:09:29.767 00:09:29.767 ' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.767 --rc genhtml_branch_coverage=1 00:09:29.767 --rc genhtml_function_coverage=1 00:09:29.767 --rc genhtml_legend=1 00:09:29.767 --rc geninfo_all_blocks=1 00:09:29.767 --rc geninfo_unexecuted_blocks=1 00:09:29.767 00:09:29.767 ' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.767 --rc genhtml_branch_coverage=1 00:09:29.767 --rc genhtml_function_coverage=1 00:09:29.767 --rc genhtml_legend=1 00:09:29.767 --rc geninfo_all_blocks=1 00:09:29.767 --rc geninfo_unexecuted_blocks=1 00:09:29.767 00:09:29.767 ' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.767 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
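For orientation, the nvmftestinit body traced below builds a small veth/bridge test topology before the bdevio run starts. The following is a simplified sketch assembled from the commands that appear later in this log (interface, namespace, and address names are taken from the trace itself; ordering, link-up steps, and ancillary options are abridged), not a verbatim excerpt of nvmf/common.sh:

# Target runs inside its own network namespace
ip netns add nvmf_tgt_ns_spdk
# Two veth pairs: initiator side and target side, each with a bridge-facing peer
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Addresses: initiator 10.0.0.1, target listener 10.0.0.3 (with .2/.4 as secondaries)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# A bridge joins the *_br peers so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP traffic to the default port
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks further down (10.0.0.1 through 10.0.0.4) verify this connectivity before the nvmf target application is launched.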
00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.767 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.768 Cannot find device "nvmf_init_br" 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.768 Cannot find device "nvmf_init_br2" 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:29.768 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.025 Cannot find device "nvmf_tgt_br" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.025 Cannot find device "nvmf_tgt_br2" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.025 Cannot find device "nvmf_init_br" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.025 Cannot find device "nvmf_init_br2" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:30.025 Cannot find device "nvmf_tgt_br" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:30.025 Cannot find device "nvmf_tgt_br2" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:30.025 Cannot find device "nvmf_br" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:30.025 Cannot find device "nvmf_init_if" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:30.025 Cannot find device "nvmf_init_if2" 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.025 
20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:30.025 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.283 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:30.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:30.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:30.284 00:09:30.284 --- 10.0.0.3 ping statistics --- 00:09:30.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.284 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:30.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:30.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:09:30.284 00:09:30.284 --- 10.0.0.4 ping statistics --- 00:09:30.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.284 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:09:30.284 00:09:30.284 --- 10.0.0.1 ping statistics --- 00:09:30.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.284 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:30.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:09:30.284 00:09:30.284 --- 10.0.0.2 ping statistics --- 00:09:30.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.284 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66947 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66947 00:09:30.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66947 ']' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.284 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.284 [2024-11-26 20:30:30.542488] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:09:30.284 [2024-11-26 20:30:30.542747] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.542 [2024-11-26 20:30:30.697252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.542 [2024-11-26 20:30:30.760413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.542 [2024-11-26 20:30:30.760936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.542 [2024-11-26 20:30:30.761481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.542 [2024-11-26 20:30:30.762081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.542 [2024-11-26 20:30:30.762343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.542 [2024-11-26 20:30:30.763904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:30.542 [2024-11-26 20:30:30.764002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:30.542 [2024-11-26 20:30:30.764127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:30.542 [2024-11-26 20:30:30.764134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.542 [2024-11-26 20:30:30.822140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.542 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.542 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:30.542 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.542 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.542 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.800 [2024-11-26 20:30:30.934699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.800 Malloc0 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.800 20:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.800 [2024-11-26 20:30:30.997480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.800 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.800 { 00:09:30.800 "params": { 00:09:30.800 "name": "Nvme$subsystem", 00:09:30.800 "trtype": "$TEST_TRANSPORT", 00:09:30.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.800 "adrfam": "ipv4", 00:09:30.800 "trsvcid": "$NVMF_PORT", 00:09:30.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.800 "hdgst": ${hdgst:-false}, 00:09:30.801 "ddgst": ${ddgst:-false} 00:09:30.801 }, 00:09:30.801 "method": "bdev_nvme_attach_controller" 00:09:30.801 } 00:09:30.801 EOF 00:09:30.801 )") 00:09:30.801 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:30.801 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:30.801 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:30.801 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.801 "params": { 00:09:30.801 "name": "Nvme1", 00:09:30.801 "trtype": "tcp", 00:09:30.801 "traddr": "10.0.0.3", 00:09:30.801 "adrfam": "ipv4", 00:09:30.801 "trsvcid": "4420", 00:09:30.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.801 "hdgst": false, 00:09:30.801 "ddgst": false 00:09:30.801 }, 00:09:30.801 "method": "bdev_nvme_attach_controller" 00:09:30.801 }' 00:09:30.801 [2024-11-26 20:30:31.057987] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:09:30.801 [2024-11-26 20:30:31.058233] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66971 ] 00:09:31.058 [2024-11-26 20:30:31.207815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.058 [2024-11-26 20:30:31.272744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.058 [2024-11-26 20:30:31.272894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.058 [2024-11-26 20:30:31.272899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.058 [2024-11-26 20:30:31.339575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.316 I/O targets: 00:09:31.316 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:31.316 00:09:31.316 00:09:31.316 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.316 http://cunit.sourceforge.net/ 00:09:31.316 00:09:31.316 00:09:31.316 Suite: bdevio tests on: Nvme1n1 00:09:31.316 Test: blockdev write read block ...passed 00:09:31.316 Test: blockdev write zeroes read block ...passed 00:09:31.316 Test: blockdev write zeroes read no split ...passed 00:09:31.316 Test: blockdev write zeroes read split ...passed 00:09:31.316 Test: blockdev write zeroes read split partial ...passed 00:09:31.316 Test: blockdev reset ...[2024-11-26 20:30:31.500136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:31.316 [2024-11-26 20:30:31.500490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67190 (9): Bad file descriptor 00:09:31.316 [2024-11-26 20:30:31.513913] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resettinpassed 00:09:31.316 Test: blockdev write read 8 blocks ...g controller successful. 
00:09:31.316 passed 00:09:31.316 Test: blockdev write read size > 128k ...passed 00:09:31.316 Test: blockdev write read invalid size ...passed 00:09:31.316 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:31.316 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:31.316 Test: blockdev write read max offset ...passed 00:09:31.316 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:31.316 Test: blockdev writev readv 8 blocks ...passed 00:09:31.316 Test: blockdev writev readv 30 x 1block ...passed 00:09:31.316 Test: blockdev writev readv block ...passed 00:09:31.316 Test: blockdev writev readv size > 128k ...passed 00:09:31.316 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:31.316 Test: blockdev comparev and writev ...[2024-11-26 20:30:31.522087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.316 [2024-11-26 20:30:31.522133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:31.316 [2024-11-26 20:30:31.522156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.316 [2024-11-26 20:30:31.522167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:31.316 [2024-11-26 20:30:31.522468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.316 [2024-11-26 20:30:31.522488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:31.316 [2024-11-26 20:30:31.522505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.316 [2024-11-26 20:30:31.522516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:31.316 [2024-11-26 20:30:31.522795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.316 [2024-11-26 20:30:31.522813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:31.316 [2024-11-26 20:30:31.522830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.317 [2024-11-26 20:30:31.522841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:31.317 [2024-11-26 20:30:31.523116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.317 [2024-11-26 20:30:31.523132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:31.317 [2024-11-26 20:30:31.523149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:31.317 [2024-11-26 20:30:31.523159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:09:31.317 passed 00:09:31.317 Test: blockdev nvme passthru rw ...passed 00:09:31.317 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:30:31.524236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.317 [2024-11-26 20:30:31.524264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:31.317 [2024-11-26 20:30:31.524379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.317 [2024-11-26 20:30:31.524396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:31.317 [2024-11-26 20:30:31.524497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.317 [2024-11-26 20:30:31.524520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:31.317 [2024-11-26 20:30:31.524618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:31.317 [2024-11-26 20:30:31.524633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:31.317 passed 00:09:31.317 Test: blockdev nvme admin passthru ...passed 00:09:31.317 Test: blockdev copy ...passed 00:09:31.317 00:09:31.317 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.317 suites 1 1 n/a 0 0 00:09:31.317 tests 23 23 23 0 0 00:09:31.317 asserts 152 152 152 0 n/a 00:09:31.317 00:09:31.317 Elapsed time = 0.152 seconds 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.576 rmmod nvme_tcp 00:09:31.576 rmmod nvme_fabrics 00:09:31.576 rmmod nvme_keyring 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66947 ']' 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66947 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66947 ']' 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66947 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66947 00:09:31.576 killing process with pid 66947 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66947' 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66947 00:09:31.576 20:30:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66947 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:31.835 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:32.093 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:32.093 20:30:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:32.093 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:32.094 00:09:32.094 real 0m2.461s 00:09:32.094 user 0m6.578s 00:09:32.094 sys 0m0.831s 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.094 ************************************ 00:09:32.094 END TEST nvmf_bdevio 00:09:32.094 ************************************ 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:32.094 00:09:32.094 real 2m35.568s 00:09:32.094 user 6m49.268s 00:09:32.094 sys 0m52.340s 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.094 ************************************ 00:09:32.094 END TEST nvmf_target_core 00:09:32.094 ************************************ 00:09:32.094 20:30:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:32.094 20:30:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.094 20:30:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.094 20:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.094 ************************************ 00:09:32.094 START TEST nvmf_target_extra 00:09:32.094 ************************************ 00:09:32.094 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:32.353 * Looking for test storage... 
00:09:32.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.353 --rc genhtml_branch_coverage=1 00:09:32.353 --rc genhtml_function_coverage=1 00:09:32.353 --rc genhtml_legend=1 00:09:32.353 --rc geninfo_all_blocks=1 00:09:32.353 --rc geninfo_unexecuted_blocks=1 00:09:32.353 00:09:32.353 ' 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.353 --rc genhtml_branch_coverage=1 00:09:32.353 --rc genhtml_function_coverage=1 00:09:32.353 --rc genhtml_legend=1 00:09:32.353 --rc geninfo_all_blocks=1 00:09:32.353 --rc geninfo_unexecuted_blocks=1 00:09:32.353 00:09:32.353 ' 00:09:32.353 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.354 --rc genhtml_branch_coverage=1 00:09:32.354 --rc genhtml_function_coverage=1 00:09:32.354 --rc genhtml_legend=1 00:09:32.354 --rc geninfo_all_blocks=1 00:09:32.354 --rc geninfo_unexecuted_blocks=1 00:09:32.354 00:09:32.354 ' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:32.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.354 --rc genhtml_branch_coverage=1 00:09:32.354 --rc genhtml_function_coverage=1 00:09:32.354 --rc genhtml_legend=1 00:09:32.354 --rc geninfo_all_blocks=1 00:09:32.354 --rc geninfo_unexecuted_blocks=1 00:09:32.354 00:09:32.354 ' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.354 20:30:32 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:32.354 ************************************ 00:09:32.354 START TEST nvmf_auth_target 00:09:32.354 ************************************ 00:09:32.354 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:32.614 * Looking for test storage... 
00:09:32.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:32.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.614 --rc genhtml_branch_coverage=1 00:09:32.614 --rc genhtml_function_coverage=1 00:09:32.614 --rc genhtml_legend=1 00:09:32.614 --rc geninfo_all_blocks=1 00:09:32.614 --rc geninfo_unexecuted_blocks=1 00:09:32.614 00:09:32.614 ' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:32.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.614 --rc genhtml_branch_coverage=1 00:09:32.614 --rc genhtml_function_coverage=1 00:09:32.614 --rc genhtml_legend=1 00:09:32.614 --rc geninfo_all_blocks=1 00:09:32.614 --rc geninfo_unexecuted_blocks=1 00:09:32.614 00:09:32.614 ' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:32.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.614 --rc genhtml_branch_coverage=1 00:09:32.614 --rc genhtml_function_coverage=1 00:09:32.614 --rc genhtml_legend=1 00:09:32.614 --rc geninfo_all_blocks=1 00:09:32.614 --rc geninfo_unexecuted_blocks=1 00:09:32.614 00:09:32.614 ' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:32.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.614 --rc genhtml_branch_coverage=1 00:09:32.614 --rc genhtml_function_coverage=1 00:09:32.614 --rc genhtml_legend=1 00:09:32.614 --rc geninfo_all_blocks=1 00:09:32.614 --rc geninfo_unexecuted_blocks=1 00:09:32.614 00:09:32.614 ' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.614 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.615 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.615 
20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:32.615 Cannot find device "nvmf_init_br" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:32.615 Cannot find device "nvmf_init_br2" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:32.615 Cannot find device "nvmf_tgt_br" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.615 Cannot find device "nvmf_tgt_br2" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:32.615 Cannot find device "nvmf_init_br" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:32.615 Cannot find device "nvmf_init_br2" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:32.615 Cannot find device "nvmf_tgt_br" 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:32.615 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:32.874 Cannot find device "nvmf_tgt_br2" 00:09:32.874 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:32.874 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:32.874 Cannot find device "nvmf_br" 00:09:32.874 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:32.874 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:32.874 Cannot find device "nvmf_init_if" 00:09:32.874 20:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:32.874 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:32.874 Cannot find device "nvmf_init_if2" 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.874 20:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.874 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.133 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.133 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:33.133 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:33.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:09:33.134 00:09:33.134 --- 10.0.0.3 ping statistics --- 00:09:33.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.134 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:33.134 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:33.134 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:09:33.134 00:09:33.134 --- 10.0.0.4 ping statistics --- 00:09:33.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.134 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:33.134 00:09:33.134 --- 10.0.0.1 ping statistics --- 00:09:33.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.134 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:33.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:33.134 00:09:33.134 --- 10.0.0.2 ping statistics --- 00:09:33.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.134 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67255 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67255 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67255 ']' 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
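For reference, the veth/bridge scaffolding that nvmf_veth_init assembles in the trace above reduces to roughly the commands below. This is a simplified sketch, not the test script itself: only the first initiator/target veth pair is shown (the nvmf_init_if2/nvmf_tgt_if2 pair and the 10.0.0.2/10.0.0.4 addresses are wired up the same way), and error handling is omitted. Interface names, addresses, and the port-4420 rule are taken from the log.

# simplified sketch of the topology built above (iproute2 + iptables, run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target side lives in its own netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties both veth peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1          # sanity check, mirroring the pings above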
00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.134 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67285 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b1afb07617dcc70a9602c622bb422f7b2d6ee9a1c6b6eab 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RH8 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b1afb07617dcc70a9602c622bb422f7b2d6ee9a1c6b6eab 0 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2b1afb07617dcc70a9602c622bb422f7b2d6ee9a1c6b6eab 0 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b1afb07617dcc70a9602c622bb422f7b2d6ee9a1c6b6eab 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.702 20:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RH8 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RH8 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.RH8 00:09:33.702 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1596fc41700de1abed4ed5162883e220297fd005948e2fead3d3ae7be2a12ad9 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yRw 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1596fc41700de1abed4ed5162883e220297fd005948e2fead3d3ae7be2a12ad9 3 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1596fc41700de1abed4ed5162883e220297fd005948e2fead3d3ae7be2a12ad9 3 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1596fc41700de1abed4ed5162883e220297fd005948e2fead3d3ae7be2a12ad9 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yRw 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yRw 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.yRw 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:33.703 20:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=889f36bf04f8d4853797d5528de32bc0 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.C7z 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 889f36bf04f8d4853797d5528de32bc0 1 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 889f36bf04f8d4853797d5528de32bc0 1 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=889f36bf04f8d4853797d5528de32bc0 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:33.703 20:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.C7z 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.C7z 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.C7z 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e7c6c3f92ae6f03f2b3f1e2dcf9b446ea0d6590c68357cb3 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mpE 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e7c6c3f92ae6f03f2b3f1e2dcf9b446ea0d6590c68357cb3 2 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e7c6c3f92ae6f03f2b3f1e2dcf9b446ea0d6590c68357cb3 2 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e7c6c3f92ae6f03f2b3f1e2dcf9b446ea0d6590c68357cb3 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:33.703 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mpE 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mpE 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.mpE 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce4a9d0be0fca31fa45f91dab0d51aece32ecce964c04c6a 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BvD 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce4a9d0be0fca31fa45f91dab0d51aece32ecce964c04c6a 2 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce4a9d0be0fca31fa45f91dab0d51aece32ecce964c04c6a 2 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce4a9d0be0fca31fa45f91dab0d51aece32ecce964c04c6a 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BvD 00:09:33.963 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BvD 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.BvD 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.964 20:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ccaa95129734d7317e27fe4cdba31a22 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.A29 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ccaa95129734d7317e27fe4cdba31a22 1 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ccaa95129734d7317e27fe4cdba31a22 1 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ccaa95129734d7317e27fe4cdba31a22 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.A29 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.A29 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.A29 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dd4fcadcc7e5d34984fea7e42c7e57395d88655b5d874f6105c2b40fded053a0 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4s4 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
dd4fcadcc7e5d34984fea7e42c7e57395d88655b5d874f6105c2b40fded053a0 3 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dd4fcadcc7e5d34984fea7e42c7e57395d88655b5d874f6105c2b40fded053a0 3 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dd4fcadcc7e5d34984fea7e42c7e57395d88655b5d874f6105c2b40fded053a0 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4s4 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4s4 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.4s4 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67255 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67255 ']' 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.964 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67285 /var/tmp/host.sock 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67285 ']' 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
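Note on the gen_dhchap_key traces above: each key is a raw hex string read from /dev/urandom with xxd -p, and the inline "python -" step wraps it into the DHHC-1 secret representation: the ASCII hex key with a CRC-32 trailer appended, base64-encoded and prefixed with the digest id (00=null, 01=sha256, 02=sha384, 03=sha512, matching the digests array in the trace). Below is a minimal stand-alone sketch of that formatting step using key[1] from this trace; the CRC byte order is an assumption here, not read from nvmf/common.sh, and the snippet is illustrative rather than the literal helper.

key=889f36bf04f8d4853797d5528de32bc0    # key[1] as generated by xxd above
python3 <<EOF
import base64, zlib
key = b"$key"                                # the ASCII hex string, not decoded bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 trailer (byte order assumed)
print("DHHC-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF

If the trailer convention matches, the printed secret should be the same DHHC-1:01:ODg5... value passed as --dhchap-secret to nvme connect later in this trace.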
00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.532 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RH8 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RH8 00:09:34.792 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RH8 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.yRw ]] 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yRw 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yRw 00:09:35.051 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yRw 00:09:35.310 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:35.310 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.C7z 00:09:35.310 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.310 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.569 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.569 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.C7z 00:09:35.569 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.C7z 00:09:35.828 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.mpE ]] 00:09:35.828 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mpE 00:09:35.828 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.828 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.828 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.828 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mpE 00:09:35.829 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mpE 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BvD 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BvD 00:09:36.088 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BvD 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.A29 ]] 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A29 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A29 00:09:36.347 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A29 00:09:36.606 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:36.606 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4s4 00:09:36.606 20:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.606 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.606 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.606 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4s4 00:09:36.606 20:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4s4 00:09:36.865 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:36.865 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:36.865 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:36.865 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:36.865 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:36.865 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.125 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.382 00:09:37.641 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.641 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:37.641 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:37.641 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:37.641 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:37.900 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.900 20:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:37.900 { 00:09:37.900 "cntlid": 1, 00:09:37.900 "qid": 0, 00:09:37.900 "state": "enabled", 00:09:37.900 "thread": "nvmf_tgt_poll_group_000", 00:09:37.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:37.900 "listen_address": { 00:09:37.900 "trtype": "TCP", 00:09:37.900 "adrfam": "IPv4", 00:09:37.900 "traddr": "10.0.0.3", 00:09:37.900 "trsvcid": "4420" 00:09:37.900 }, 00:09:37.900 "peer_address": { 00:09:37.900 "trtype": "TCP", 00:09:37.900 "adrfam": "IPv4", 00:09:37.900 "traddr": "10.0.0.1", 00:09:37.900 "trsvcid": "57710" 00:09:37.900 }, 00:09:37.900 "auth": { 00:09:37.900 "state": "completed", 00:09:37.900 "digest": "sha256", 00:09:37.900 "dhgroup": "null" 00:09:37.900 } 00:09:37.900 } 00:09:37.900 ]' 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:37.900 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:38.159 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:09:38.159 20:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:43.500 20:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.500 20:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.500 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:43.500 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.068 { 00:09:44.068 "cntlid": 3, 00:09:44.068 "qid": 0, 00:09:44.068 "state": "enabled", 00:09:44.068 "thread": "nvmf_tgt_poll_group_000", 00:09:44.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:44.068 "listen_address": { 00:09:44.068 "trtype": "TCP", 00:09:44.068 "adrfam": "IPv4", 00:09:44.068 "traddr": "10.0.0.3", 00:09:44.068 "trsvcid": "4420" 00:09:44.068 }, 00:09:44.068 "peer_address": { 00:09:44.068 "trtype": "TCP", 00:09:44.068 "adrfam": "IPv4", 00:09:44.068 "traddr": "10.0.0.1", 00:09:44.068 "trsvcid": "37572" 00:09:44.068 }, 00:09:44.068 "auth": { 00:09:44.068 "state": "completed", 00:09:44.068 "digest": "sha256", 00:09:44.068 "dhgroup": "null" 00:09:44.068 } 00:09:44.068 } 00:09:44.068 ]' 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.068 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:44.327 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret 
DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:09:44.327 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:45.264 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.523 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.092 00:09:46.092 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.092 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.092 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.350 { 00:09:46.350 "cntlid": 5, 00:09:46.350 "qid": 0, 00:09:46.350 "state": "enabled", 00:09:46.350 "thread": "nvmf_tgt_poll_group_000", 00:09:46.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:46.350 "listen_address": { 00:09:46.350 "trtype": "TCP", 00:09:46.350 "adrfam": "IPv4", 00:09:46.350 "traddr": "10.0.0.3", 00:09:46.350 "trsvcid": "4420" 00:09:46.350 }, 00:09:46.350 "peer_address": { 00:09:46.350 "trtype": "TCP", 00:09:46.350 "adrfam": "IPv4", 00:09:46.350 "traddr": "10.0.0.1", 00:09:46.350 "trsvcid": "37608" 00:09:46.350 }, 00:09:46.350 "auth": { 00:09:46.350 "state": "completed", 00:09:46.350 "digest": "sha256", 00:09:46.350 "dhgroup": "null" 00:09:46.350 } 00:09:46.350 } 00:09:46.350 ]' 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.350 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.919 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:09:46.919 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:47.487 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:47.747 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:48.314 00:09:48.314 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.314 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:48.314 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.572 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.572 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.572 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.572 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:48.573 { 00:09:48.573 "cntlid": 7, 00:09:48.573 "qid": 0, 00:09:48.573 "state": "enabled", 00:09:48.573 "thread": "nvmf_tgt_poll_group_000", 00:09:48.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:48.573 "listen_address": { 00:09:48.573 "trtype": "TCP", 00:09:48.573 "adrfam": "IPv4", 00:09:48.573 "traddr": "10.0.0.3", 00:09:48.573 "trsvcid": "4420" 00:09:48.573 }, 00:09:48.573 "peer_address": { 00:09:48.573 "trtype": "TCP", 00:09:48.573 "adrfam": "IPv4", 00:09:48.573 "traddr": "10.0.0.1", 00:09:48.573 "trsvcid": "57574" 00:09:48.573 }, 00:09:48.573 "auth": { 00:09:48.573 "state": "completed", 00:09:48.573 "digest": "sha256", 00:09:48.573 "dhgroup": "null" 00:09:48.573 } 00:09:48.573 } 00:09:48.573 ]' 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.573 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.832 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:09:48.832 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:49.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:49.779 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.039 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.298 00:09:50.298 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.298 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.298 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.557 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.557 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.557 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.557 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.816 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.816 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.816 { 00:09:50.816 "cntlid": 9, 00:09:50.816 "qid": 0, 00:09:50.816 "state": "enabled", 00:09:50.816 "thread": "nvmf_tgt_poll_group_000", 00:09:50.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:50.816 "listen_address": { 00:09:50.816 "trtype": "TCP", 00:09:50.816 "adrfam": "IPv4", 00:09:50.816 "traddr": "10.0.0.3", 00:09:50.816 "trsvcid": "4420" 00:09:50.816 }, 00:09:50.816 "peer_address": { 00:09:50.816 "trtype": "TCP", 00:09:50.816 "adrfam": "IPv4", 00:09:50.816 "traddr": "10.0.0.1", 00:09:50.816 "trsvcid": "57616" 00:09:50.816 }, 00:09:50.816 "auth": { 00:09:50.816 "state": "completed", 00:09:50.816 "digest": "sha256", 00:09:50.816 "dhgroup": "ffdhe2048" 00:09:50.816 } 00:09:50.816 } 00:09:50.816 ]' 00:09:50.816 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.816 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.816 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.816 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:50.816 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.816 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.816 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.816 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.074 
20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:09:51.075 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:52.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:52.011 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.270 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.271 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.271 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.529 00:09:52.529 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.529 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.529 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.788 { 00:09:52.788 "cntlid": 11, 00:09:52.788 "qid": 0, 00:09:52.788 "state": "enabled", 00:09:52.788 "thread": "nvmf_tgt_poll_group_000", 00:09:52.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:52.788 "listen_address": { 00:09:52.788 "trtype": "TCP", 00:09:52.788 "adrfam": "IPv4", 00:09:52.788 "traddr": "10.0.0.3", 00:09:52.788 "trsvcid": "4420" 00:09:52.788 }, 00:09:52.788 "peer_address": { 00:09:52.788 "trtype": "TCP", 00:09:52.788 "adrfam": "IPv4", 00:09:52.788 "traddr": "10.0.0.1", 00:09:52.788 "trsvcid": "57638" 00:09:52.788 }, 00:09:52.788 "auth": { 00:09:52.788 "state": "completed", 00:09:52.788 "digest": "sha256", 00:09:52.788 "dhgroup": "ffdhe2048" 00:09:52.788 } 00:09:52.788 } 00:09:52.788 ]' 00:09:52.788 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:53.046 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:53.046 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:53.046 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:53.046 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:53.046 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:53.046 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:53.046 
20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.304 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:09:53.304 20:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:09:53.870 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:53.871 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.437 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.696 00:09:54.696 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.696 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.696 20:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.954 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.954 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.954 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.954 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.954 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.954 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.954 { 00:09:54.954 "cntlid": 13, 00:09:54.954 "qid": 0, 00:09:54.954 "state": "enabled", 00:09:54.954 "thread": "nvmf_tgt_poll_group_000", 00:09:54.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:54.954 "listen_address": { 00:09:54.954 "trtype": "TCP", 00:09:54.954 "adrfam": "IPv4", 00:09:54.954 "traddr": "10.0.0.3", 00:09:54.954 "trsvcid": "4420" 00:09:54.954 }, 00:09:54.954 "peer_address": { 00:09:54.954 "trtype": "TCP", 00:09:54.954 "adrfam": "IPv4", 00:09:54.954 "traddr": "10.0.0.1", 00:09:54.954 "trsvcid": "57678" 00:09:54.955 }, 00:09:54.955 "auth": { 00:09:54.955 "state": "completed", 00:09:54.955 "digest": "sha256", 00:09:54.955 "dhgroup": "ffdhe2048" 00:09:54.955 } 00:09:54.955 } 00:09:54.955 ]' 00:09:54.955 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.214 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.214 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.214 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:55.214 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.214 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.214 20:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.214 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:55.781 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:09:55.781 20:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:56.347 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.606 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:09:56.607 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.607 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
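The records above and below repeat the same DH-HMAC-CHAP verification cycle once per key for each digest/DH-group combination. For readability, here is a condensed sketch of one iteration that only restates commands already visible in the trace: the subsystem NQN, host NQN/UUID, addresses and the host RPC socket are taken from the log, while keyN/ckeyN, the $key/$ckey secret strings and the target-side RPC socket behind rpc_cmd are placeholders (the trace does not expand them at this point), so treat it as an illustration rather than the literal auth.sh source.

  # One iteration of the loop (digest=sha256, dhgroup=ffdhe2048, key id N in 0..3), as seen in the trace.
  # hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock; rpc_cmd is the suite's wrapper for the
  # target-side rpc.py (its socket is not shown in this part of the log).
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key keyN --dhchap-ctrlr-key ckeyN           # ctrlr key omitted when no ckey exists (e.g. key3)
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed (digest and dhgroup are checked the same way)
  hostrpc bdev_nvme_detach_controller nvme0
  # The same key material is then exercised through the kernel initiator before the host entry is removed:
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The qpair dumps in the trace are the output of that nvmf_subsystem_get_qpairs call; the test passes an iteration when auth.state is "completed" and digest/dhgroup match what bdev_nvme_set_options configured.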
00:09:56.607 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.607 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:56.607 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:56.607 20:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:56.865 00:09:57.125 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.125 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.125 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.389 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.389 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.389 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.389 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.389 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.389 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.389 { 00:09:57.389 "cntlid": 15, 00:09:57.389 "qid": 0, 00:09:57.390 "state": "enabled", 00:09:57.390 "thread": "nvmf_tgt_poll_group_000", 00:09:57.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:57.390 "listen_address": { 00:09:57.390 "trtype": "TCP", 00:09:57.390 "adrfam": "IPv4", 00:09:57.390 "traddr": "10.0.0.3", 00:09:57.390 "trsvcid": "4420" 00:09:57.390 }, 00:09:57.390 "peer_address": { 00:09:57.390 "trtype": "TCP", 00:09:57.390 "adrfam": "IPv4", 00:09:57.390 "traddr": "10.0.0.1", 00:09:57.390 "trsvcid": "57720" 00:09:57.390 }, 00:09:57.390 "auth": { 00:09:57.390 "state": "completed", 00:09:57.390 "digest": "sha256", 00:09:57.390 "dhgroup": "ffdhe2048" 00:09:57.390 } 00:09:57.390 } 00:09:57.390 ]' 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.390 
20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.390 20:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.957 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:09:57.957 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:58.522 20:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.780 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.345 00:09:59.345 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.345 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.345 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.604 { 00:09:59.604 "cntlid": 17, 00:09:59.604 "qid": 0, 00:09:59.604 "state": "enabled", 00:09:59.604 "thread": "nvmf_tgt_poll_group_000", 00:09:59.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:09:59.604 "listen_address": { 00:09:59.604 "trtype": "TCP", 00:09:59.604 "adrfam": "IPv4", 00:09:59.604 "traddr": "10.0.0.3", 00:09:59.604 "trsvcid": "4420" 00:09:59.604 }, 00:09:59.604 "peer_address": { 00:09:59.604 "trtype": "TCP", 00:09:59.604 "adrfam": "IPv4", 00:09:59.604 "traddr": "10.0.0.1", 00:09:59.604 "trsvcid": "37454" 00:09:59.604 }, 00:09:59.604 "auth": { 00:09:59.604 "state": "completed", 00:09:59.604 "digest": "sha256", 00:09:59.604 "dhgroup": "ffdhe3072" 00:09:59.604 } 00:09:59.604 } 00:09:59.604 ]' 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.604 20:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.604 20:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.169 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:00.169 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:00.734 20:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:00.991 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.558 00:10:01.558 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.558 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.558 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.818 { 00:10:01.818 "cntlid": 19, 00:10:01.818 "qid": 0, 00:10:01.818 "state": "enabled", 00:10:01.818 "thread": "nvmf_tgt_poll_group_000", 00:10:01.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:01.818 "listen_address": { 00:10:01.818 "trtype": "TCP", 00:10:01.818 "adrfam": "IPv4", 00:10:01.818 "traddr": "10.0.0.3", 00:10:01.818 "trsvcid": "4420" 00:10:01.818 }, 00:10:01.818 "peer_address": { 00:10:01.818 "trtype": "TCP", 00:10:01.818 "adrfam": "IPv4", 00:10:01.818 "traddr": "10.0.0.1", 00:10:01.818 "trsvcid": "37496" 00:10:01.818 }, 00:10:01.818 "auth": { 00:10:01.818 "state": "completed", 00:10:01.818 "digest": "sha256", 00:10:01.818 "dhgroup": "ffdhe3072" 00:10:01.818 } 00:10:01.818 } 00:10:01.818 ]' 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.818 20:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.818 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:01.818 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.818 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.818 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.818 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.077 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:02.077 20:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:03.014 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.274 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.844 00:10:03.844 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.844 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.844 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.103 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.103 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.103 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.103 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.104 { 00:10:04.104 "cntlid": 21, 00:10:04.104 "qid": 0, 00:10:04.104 "state": "enabled", 00:10:04.104 "thread": "nvmf_tgt_poll_group_000", 00:10:04.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:04.104 "listen_address": { 00:10:04.104 "trtype": "TCP", 00:10:04.104 "adrfam": "IPv4", 00:10:04.104 "traddr": "10.0.0.3", 00:10:04.104 "trsvcid": "4420" 00:10:04.104 }, 00:10:04.104 "peer_address": { 00:10:04.104 "trtype": "TCP", 00:10:04.104 "adrfam": "IPv4", 00:10:04.104 "traddr": "10.0.0.1", 00:10:04.104 "trsvcid": "37528" 00:10:04.104 }, 00:10:04.104 "auth": { 00:10:04.104 "state": "completed", 00:10:04.104 "digest": "sha256", 00:10:04.104 "dhgroup": "ffdhe3072" 00:10:04.104 } 00:10:04.104 } 00:10:04.104 ]' 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.104 20:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.104 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.363 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:04.363 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:05.300 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:05.866 00:10:05.866 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.866 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.866 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.124 { 00:10:06.124 "cntlid": 23, 00:10:06.124 "qid": 0, 00:10:06.124 "state": "enabled", 00:10:06.124 "thread": "nvmf_tgt_poll_group_000", 00:10:06.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:06.124 "listen_address": { 00:10:06.124 "trtype": "TCP", 00:10:06.124 "adrfam": "IPv4", 00:10:06.124 "traddr": "10.0.0.3", 00:10:06.124 "trsvcid": "4420" 00:10:06.124 }, 00:10:06.124 "peer_address": { 00:10:06.124 "trtype": "TCP", 00:10:06.124 "adrfam": "IPv4", 00:10:06.124 "traddr": "10.0.0.1", 00:10:06.124 "trsvcid": "37550" 00:10:06.124 }, 00:10:06.124 "auth": { 00:10:06.124 "state": "completed", 00:10:06.124 "digest": "sha256", 00:10:06.124 "dhgroup": "ffdhe3072" 00:10:06.124 } 00:10:06.124 } 00:10:06.124 ]' 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:06.124 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.383 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.383 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.383 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.641 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:06.641 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:07.207 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.772 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.029 00:10:08.029 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:08.029 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.029 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.287 { 00:10:08.287 "cntlid": 25, 00:10:08.287 "qid": 0, 00:10:08.287 "state": "enabled", 00:10:08.287 "thread": "nvmf_tgt_poll_group_000", 00:10:08.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:08.287 "listen_address": { 00:10:08.287 "trtype": "TCP", 00:10:08.287 "adrfam": "IPv4", 00:10:08.287 "traddr": "10.0.0.3", 00:10:08.287 "trsvcid": "4420" 00:10:08.287 }, 00:10:08.287 "peer_address": { 00:10:08.287 "trtype": "TCP", 00:10:08.287 "adrfam": "IPv4", 00:10:08.287 "traddr": "10.0.0.1", 00:10:08.287 "trsvcid": "37582" 00:10:08.287 }, 00:10:08.287 "auth": { 00:10:08.287 "state": "completed", 00:10:08.287 "digest": "sha256", 00:10:08.287 "dhgroup": "ffdhe4096" 00:10:08.287 } 00:10:08.287 } 00:10:08.287 ]' 00:10:08.287 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.599 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.883 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:08.883 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.815 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:09.816 20:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.075 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.642 00:10:10.642 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.642 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.642 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.902 { 00:10:10.902 "cntlid": 27, 00:10:10.902 "qid": 0, 00:10:10.902 "state": "enabled", 00:10:10.902 "thread": "nvmf_tgt_poll_group_000", 00:10:10.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:10.902 "listen_address": { 00:10:10.902 "trtype": "TCP", 00:10:10.902 "adrfam": "IPv4", 00:10:10.902 "traddr": "10.0.0.3", 00:10:10.902 "trsvcid": "4420" 00:10:10.902 }, 00:10:10.902 "peer_address": { 00:10:10.902 "trtype": "TCP", 00:10:10.902 "adrfam": "IPv4", 00:10:10.902 "traddr": "10.0.0.1", 00:10:10.902 "trsvcid": "58430" 00:10:10.902 }, 00:10:10.902 "auth": { 00:10:10.902 "state": "completed", 
00:10:10.902 "digest": "sha256", 00:10:10.902 "dhgroup": "ffdhe4096" 00:10:10.902 } 00:10:10.902 } 00:10:10.902 ]' 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.902 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.470 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:11.470 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:12.038 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:12.297 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:12.297 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.297 20:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:12.297 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:12.297 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:12.297 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.298 20:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.866 00:10:12.866 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.866 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.866 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.125 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.125 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.125 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.125 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.125 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.125 { 00:10:13.126 "cntlid": 29, 00:10:13.126 "qid": 0, 00:10:13.126 "state": "enabled", 00:10:13.126 "thread": "nvmf_tgt_poll_group_000", 00:10:13.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:13.126 "listen_address": { 00:10:13.126 "trtype": "TCP", 00:10:13.126 "adrfam": "IPv4", 00:10:13.126 "traddr": "10.0.0.3", 00:10:13.126 "trsvcid": "4420" 00:10:13.126 }, 00:10:13.126 "peer_address": { 00:10:13.126 "trtype": "TCP", 00:10:13.126 "adrfam": 
"IPv4", 00:10:13.126 "traddr": "10.0.0.1", 00:10:13.126 "trsvcid": "58458" 00:10:13.126 }, 00:10:13.126 "auth": { 00:10:13.126 "state": "completed", 00:10:13.126 "digest": "sha256", 00:10:13.126 "dhgroup": "ffdhe4096" 00:10:13.126 } 00:10:13.126 } 00:10:13.126 ]' 00:10:13.126 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.385 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.643 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:13.644 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:14.580 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:14.842 20:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.842 20:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.842 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.842 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:14.842 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.842 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.422 00:10:15.422 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.422 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.422 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.681 { 00:10:15.681 "cntlid": 31, 00:10:15.681 "qid": 0, 00:10:15.681 "state": "enabled", 00:10:15.681 "thread": "nvmf_tgt_poll_group_000", 00:10:15.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:15.681 "listen_address": { 00:10:15.681 "trtype": "TCP", 00:10:15.681 "adrfam": "IPv4", 00:10:15.681 "traddr": "10.0.0.3", 00:10:15.681 "trsvcid": "4420" 00:10:15.681 }, 00:10:15.681 "peer_address": { 00:10:15.681 "trtype": "TCP", 
00:10:15.681 "adrfam": "IPv4", 00:10:15.681 "traddr": "10.0.0.1", 00:10:15.681 "trsvcid": "58476" 00:10:15.681 }, 00:10:15.681 "auth": { 00:10:15.681 "state": "completed", 00:10:15.681 "digest": "sha256", 00:10:15.681 "dhgroup": "ffdhe4096" 00:10:15.681 } 00:10:15.681 } 00:10:15.681 ]' 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:15.681 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.681 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.681 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.681 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.336 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:16.336 20:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:16.905 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:17.164 
20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.164 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.731 00:10:17.731 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.731 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.731 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.989 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.989 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.989 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.989 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.989 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.989 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.989 { 00:10:17.989 "cntlid": 33, 00:10:17.989 "qid": 0, 00:10:17.989 "state": "enabled", 00:10:17.989 "thread": "nvmf_tgt_poll_group_000", 00:10:17.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:17.990 "listen_address": { 00:10:17.990 "trtype": "TCP", 00:10:17.990 "adrfam": "IPv4", 00:10:17.990 "traddr": 
"10.0.0.3", 00:10:17.990 "trsvcid": "4420" 00:10:17.990 }, 00:10:17.990 "peer_address": { 00:10:17.990 "trtype": "TCP", 00:10:17.990 "adrfam": "IPv4", 00:10:17.990 "traddr": "10.0.0.1", 00:10:17.990 "trsvcid": "58522" 00:10:17.990 }, 00:10:17.990 "auth": { 00:10:17.990 "state": "completed", 00:10:17.990 "digest": "sha256", 00:10:17.990 "dhgroup": "ffdhe6144" 00:10:17.990 } 00:10:17.990 } 00:10:17.990 ]' 00:10:17.990 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.990 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.990 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.248 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:18.248 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.249 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.249 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.249 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.507 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:18.507 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.075 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.333 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.591 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.591 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.591 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.591 20:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.849 00:10:19.849 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.849 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.849 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.415 { 00:10:20.415 "cntlid": 35, 00:10:20.415 "qid": 0, 00:10:20.415 "state": "enabled", 00:10:20.415 "thread": "nvmf_tgt_poll_group_000", 
00:10:20.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:20.415 "listen_address": { 00:10:20.415 "trtype": "TCP", 00:10:20.415 "adrfam": "IPv4", 00:10:20.415 "traddr": "10.0.0.3", 00:10:20.415 "trsvcid": "4420" 00:10:20.415 }, 00:10:20.415 "peer_address": { 00:10:20.415 "trtype": "TCP", 00:10:20.415 "adrfam": "IPv4", 00:10:20.415 "traddr": "10.0.0.1", 00:10:20.415 "trsvcid": "39320" 00:10:20.415 }, 00:10:20.415 "auth": { 00:10:20.415 "state": "completed", 00:10:20.415 "digest": "sha256", 00:10:20.415 "dhgroup": "ffdhe6144" 00:10:20.415 } 00:10:20.415 } 00:10:20.415 ]' 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.415 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.416 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.674 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:20.674 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.615 20:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.615 20:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.898 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.466 00:10:22.466 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.466 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.466 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.724 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.724 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.724 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.724 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.724 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.724 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.724 { 
00:10:22.724 "cntlid": 37, 00:10:22.724 "qid": 0, 00:10:22.724 "state": "enabled", 00:10:22.724 "thread": "nvmf_tgt_poll_group_000", 00:10:22.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:22.725 "listen_address": { 00:10:22.725 "trtype": "TCP", 00:10:22.725 "adrfam": "IPv4", 00:10:22.725 "traddr": "10.0.0.3", 00:10:22.725 "trsvcid": "4420" 00:10:22.725 }, 00:10:22.725 "peer_address": { 00:10:22.725 "trtype": "TCP", 00:10:22.725 "adrfam": "IPv4", 00:10:22.725 "traddr": "10.0.0.1", 00:10:22.725 "trsvcid": "39344" 00:10:22.725 }, 00:10:22.725 "auth": { 00:10:22.725 "state": "completed", 00:10:22.725 "digest": "sha256", 00:10:22.725 "dhgroup": "ffdhe6144" 00:10:22.725 } 00:10:22.725 } 00:10:22.725 ]' 00:10:22.725 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.725 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.725 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.725 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:22.725 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.725 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.725 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.725 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.983 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:22.983 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.919 20:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.919 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.485 00:10:24.485 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.485 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.485 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:24.744 { 00:10:24.744 "cntlid": 39, 00:10:24.744 "qid": 0, 00:10:24.744 "state": "enabled", 00:10:24.744 "thread": "nvmf_tgt_poll_group_000", 00:10:24.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:24.744 "listen_address": { 00:10:24.744 "trtype": "TCP", 00:10:24.744 "adrfam": "IPv4", 00:10:24.744 "traddr": "10.0.0.3", 00:10:24.744 "trsvcid": "4420" 00:10:24.744 }, 00:10:24.744 "peer_address": { 00:10:24.744 "trtype": "TCP", 00:10:24.744 "adrfam": "IPv4", 00:10:24.744 "traddr": "10.0.0.1", 00:10:24.744 "trsvcid": "39376" 00:10:24.744 }, 00:10:24.744 "auth": { 00:10:24.744 "state": "completed", 00:10:24.744 "digest": "sha256", 00:10:24.744 "dhgroup": "ffdhe6144" 00:10:24.744 } 00:10:24.744 } 00:10:24.744 ]' 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.744 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.017 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:25.017 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.017 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.017 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.017 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.276 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:25.276 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.212 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.145 00:10:27.145 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.145 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.145 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.404 { 00:10:27.404 "cntlid": 41, 00:10:27.404 "qid": 0, 00:10:27.404 "state": "enabled", 00:10:27.404 "thread": "nvmf_tgt_poll_group_000", 00:10:27.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:27.404 "listen_address": { 00:10:27.404 "trtype": "TCP", 00:10:27.404 "adrfam": "IPv4", 00:10:27.404 "traddr": "10.0.0.3", 00:10:27.404 "trsvcid": "4420" 00:10:27.404 }, 00:10:27.404 "peer_address": { 00:10:27.404 "trtype": "TCP", 00:10:27.404 "adrfam": "IPv4", 00:10:27.404 "traddr": "10.0.0.1", 00:10:27.404 "trsvcid": "39414" 00:10:27.404 }, 00:10:27.404 "auth": { 00:10:27.404 "state": "completed", 00:10:27.404 "digest": "sha256", 00:10:27.404 "dhgroup": "ffdhe8192" 00:10:27.404 } 00:10:27.404 } 00:10:27.404 ]' 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.404 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.664 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:27.664 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.664 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.664 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.664 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.922 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:27.922 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
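Condensed for readability, the per-key round that the trace above just finished (sha256 digest, ffdhe8192 group, key index 0) boils down to the shell sketch below. The rpc.py path, the /var/tmp/host.sock socket, the NQNs and the key names are copied from the trace; the key0/ckey0 keys themselves are registered earlier in target/auth.sh and do not appear in this excerpt.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835

# host side: limit the SPDK initiator to a single digest/dhgroup combination
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# target side (rpc_cmd in the script, i.e. the target's RPC socket rather than host.sock):
# admit the host with the matching DH-HMAC-CHAP key pair
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller, which forces the authentication handshake to run
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0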
00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.858 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.858 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.859 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.859 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.859 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.859 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.859 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.859 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.795 00:10:29.795 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.795 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.795 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.053 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.053 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.053 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.053 20:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.054 { 00:10:30.054 "cntlid": 43, 00:10:30.054 "qid": 0, 00:10:30.054 "state": "enabled", 00:10:30.054 "thread": "nvmf_tgt_poll_group_000", 00:10:30.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:30.054 "listen_address": { 00:10:30.054 "trtype": "TCP", 00:10:30.054 "adrfam": "IPv4", 00:10:30.054 "traddr": "10.0.0.3", 00:10:30.054 "trsvcid": "4420" 00:10:30.054 }, 00:10:30.054 "peer_address": { 00:10:30.054 "trtype": "TCP", 00:10:30.054 "adrfam": "IPv4", 00:10:30.054 "traddr": "10.0.0.1", 00:10:30.054 "trsvcid": "55528" 00:10:30.054 }, 00:10:30.054 "auth": { 00:10:30.054 "state": "completed", 00:10:30.054 "digest": "sha256", 00:10:30.054 "dhgroup": "ffdhe8192" 00:10:30.054 } 00:10:30.054 } 00:10:30.054 ]' 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.054 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.313 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:30.313 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
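Each round then checks, on both ends, that authentication actually completed before tearing the controller down again. The probes below are the same jq expressions that appear in the trace, shown piped directly instead of through the saved $qpairs variable; rpc, subnqn and hostnqn are as in the sketch after the previous round.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'       # expect: sha256
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'      # expect: ffdhe8192
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect: completed
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0              # end of the SPDK-initiator half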
00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:31.249 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.507 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.440 00:10:32.440 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.440 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.440 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.007 20:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.007 { 00:10:33.007 "cntlid": 45, 00:10:33.007 "qid": 0, 00:10:33.007 "state": "enabled", 00:10:33.007 "thread": "nvmf_tgt_poll_group_000", 00:10:33.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:33.007 "listen_address": { 00:10:33.007 "trtype": "TCP", 00:10:33.007 "adrfam": "IPv4", 00:10:33.007 "traddr": "10.0.0.3", 00:10:33.007 "trsvcid": "4420" 00:10:33.007 }, 00:10:33.007 "peer_address": { 00:10:33.007 "trtype": "TCP", 00:10:33.007 "adrfam": "IPv4", 00:10:33.007 "traddr": "10.0.0.1", 00:10:33.007 "trsvcid": "55556" 00:10:33.007 }, 00:10:33.007 "auth": { 00:10:33.007 "state": "completed", 00:10:33.007 "digest": "sha256", 00:10:33.007 "dhgroup": "ffdhe8192" 00:10:33.007 } 00:10:33.007 } 00:10:33.007 ]' 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.007 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.574 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:33.574 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
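Once the SPDK initiator is detached, the same key pair is exercised through the kernel initiator. The nvme-cli invocation in the trace maps onto the sketch below; the two DHHC-1 strings are placeholders standing in for the literal host and controller secrets printed above, not new values.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835

# kernel initiator: -i 1 asks for a single I/O queue, -l 0 sets ctrl-loss-tmo to zero
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 \
    --dhchap-secret 'DHHC-1:02:<host secret from the trace>' \
    --dhchap-ctrl-secret 'DHHC-1:01:<controller secret from the trace>'

nvme disconnect -n "$subnqn"                          # expect: 1 controller disconnected

# target side: revoke the host again so the next key/dhgroup round starts from a clean state
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"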
00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.140 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.708 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.275 00:10:35.275 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.275 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.275 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.534 
20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.534 { 00:10:35.534 "cntlid": 47, 00:10:35.534 "qid": 0, 00:10:35.534 "state": "enabled", 00:10:35.534 "thread": "nvmf_tgt_poll_group_000", 00:10:35.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:35.534 "listen_address": { 00:10:35.534 "trtype": "TCP", 00:10:35.534 "adrfam": "IPv4", 00:10:35.534 "traddr": "10.0.0.3", 00:10:35.534 "trsvcid": "4420" 00:10:35.534 }, 00:10:35.534 "peer_address": { 00:10:35.534 "trtype": "TCP", 00:10:35.534 "adrfam": "IPv4", 00:10:35.534 "traddr": "10.0.0.1", 00:10:35.534 "trsvcid": "55576" 00:10:35.534 }, 00:10:35.534 "auth": { 00:10:35.534 "state": "completed", 00:10:35.534 "digest": "sha256", 00:10:35.534 "dhgroup": "ffdhe8192" 00:10:35.534 } 00:10:35.534 } 00:10:35.534 ]' 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:35.534 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.792 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.792 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.792 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.051 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:36.051 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.650 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.909 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.168 00:10:37.168 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.168 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.168 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.736 { 00:10:37.736 "cntlid": 49, 00:10:37.736 "qid": 0, 00:10:37.736 "state": "enabled", 00:10:37.736 "thread": "nvmf_tgt_poll_group_000", 00:10:37.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:37.736 "listen_address": { 00:10:37.736 "trtype": "TCP", 00:10:37.736 "adrfam": "IPv4", 00:10:37.736 "traddr": "10.0.0.3", 00:10:37.736 "trsvcid": "4420" 00:10:37.736 }, 00:10:37.736 "peer_address": { 00:10:37.736 "trtype": "TCP", 00:10:37.736 "adrfam": "IPv4", 00:10:37.736 "traddr": "10.0.0.1", 00:10:37.736 "trsvcid": "55602" 00:10:37.736 }, 00:10:37.736 "auth": { 00:10:37.736 "state": "completed", 00:10:37.736 "digest": "sha384", 00:10:37.736 "dhgroup": "null" 00:10:37.736 } 00:10:37.736 } 00:10:37.736 ]' 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:37.736 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.736 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.736 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.736 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.995 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:37.995 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.932 20:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.932 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.189 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.447 00:10:39.447 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.447 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:10:39.447 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.706 { 00:10:39.706 "cntlid": 51, 00:10:39.706 "qid": 0, 00:10:39.706 "state": "enabled", 00:10:39.706 "thread": "nvmf_tgt_poll_group_000", 00:10:39.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:39.706 "listen_address": { 00:10:39.706 "trtype": "TCP", 00:10:39.706 "adrfam": "IPv4", 00:10:39.706 "traddr": "10.0.0.3", 00:10:39.706 "trsvcid": "4420" 00:10:39.706 }, 00:10:39.706 "peer_address": { 00:10:39.706 "trtype": "TCP", 00:10:39.706 "adrfam": "IPv4", 00:10:39.706 "traddr": "10.0.0.1", 00:10:39.706 "trsvcid": "45404" 00:10:39.706 }, 00:10:39.706 "auth": { 00:10:39.706 "state": "completed", 00:10:39.706 "digest": "sha384", 00:10:39.706 "dhgroup": "null" 00:10:39.706 } 00:10:39.706 } 00:10:39.706 ]' 00:10:39.706 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.964 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.222 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:40.222 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.157 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:41.157 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.417 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.675 00:10:41.675 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.675 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:41.675 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.935 { 00:10:41.935 "cntlid": 53, 00:10:41.935 "qid": 0, 00:10:41.935 "state": "enabled", 00:10:41.935 "thread": "nvmf_tgt_poll_group_000", 00:10:41.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:41.935 "listen_address": { 00:10:41.935 "trtype": "TCP", 00:10:41.935 "adrfam": "IPv4", 00:10:41.935 "traddr": "10.0.0.3", 00:10:41.935 "trsvcid": "4420" 00:10:41.935 }, 00:10:41.935 "peer_address": { 00:10:41.935 "trtype": "TCP", 00:10:41.935 "adrfam": "IPv4", 00:10:41.935 "traddr": "10.0.0.1", 00:10:41.935 "trsvcid": "45426" 00:10:41.935 }, 00:10:41.935 "auth": { 00:10:41.935 "state": "completed", 00:10:41.935 "digest": "sha384", 00:10:41.935 "dhgroup": "null" 00:10:41.935 } 00:10:41.935 } 00:10:41.935 ]' 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:41.935 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.195 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.195 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.195 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.453 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:42.453 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:43.021 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:43.280 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.538 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:10:43.539 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.539 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.539 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.539 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:43.539 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.539 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.797 00:10:43.797 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.797 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.797 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.055 { 00:10:44.055 "cntlid": 55, 00:10:44.055 "qid": 0, 00:10:44.055 "state": "enabled", 00:10:44.055 "thread": "nvmf_tgt_poll_group_000", 00:10:44.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:44.055 "listen_address": { 00:10:44.055 "trtype": "TCP", 00:10:44.055 "adrfam": "IPv4", 00:10:44.055 "traddr": "10.0.0.3", 00:10:44.055 "trsvcid": "4420" 00:10:44.055 }, 00:10:44.055 "peer_address": { 00:10:44.055 "trtype": "TCP", 00:10:44.055 "adrfam": "IPv4", 00:10:44.055 "traddr": "10.0.0.1", 00:10:44.055 "trsvcid": "45448" 00:10:44.055 }, 00:10:44.055 "auth": { 00:10:44.055 "state": "completed", 00:10:44.055 "digest": "sha384", 00:10:44.055 "dhgroup": "null" 00:10:44.055 } 00:10:44.055 } 00:10:44.055 ]' 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.055 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.313 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:44.313 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.313 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.313 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.313 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.571 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:44.571 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:45.151 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:10:45.151 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:45.151 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.152 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.152 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.152 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.152 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.152 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:45.152 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.721 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.979 00:10:45.979 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:10:45.979 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.979 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.238 { 00:10:46.238 "cntlid": 57, 00:10:46.238 "qid": 0, 00:10:46.238 "state": "enabled", 00:10:46.238 "thread": "nvmf_tgt_poll_group_000", 00:10:46.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:46.238 "listen_address": { 00:10:46.238 "trtype": "TCP", 00:10:46.238 "adrfam": "IPv4", 00:10:46.238 "traddr": "10.0.0.3", 00:10:46.238 "trsvcid": "4420" 00:10:46.238 }, 00:10:46.238 "peer_address": { 00:10:46.238 "trtype": "TCP", 00:10:46.238 "adrfam": "IPv4", 00:10:46.238 "traddr": "10.0.0.1", 00:10:46.238 "trsvcid": "45466" 00:10:46.238 }, 00:10:46.238 "auth": { 00:10:46.238 "state": "completed", 00:10:46.238 "digest": "sha384", 00:10:46.238 "dhgroup": "ffdhe2048" 00:10:46.238 } 00:10:46.238 } 00:10:46.238 ]' 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.238 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.497 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:46.497 20:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: 
--dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:47.432 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.690 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.947 00:10:47.947 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.947 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.947 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.205 { 00:10:48.205 "cntlid": 59, 00:10:48.205 "qid": 0, 00:10:48.205 "state": "enabled", 00:10:48.205 "thread": "nvmf_tgt_poll_group_000", 00:10:48.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:48.205 "listen_address": { 00:10:48.205 "trtype": "TCP", 00:10:48.205 "adrfam": "IPv4", 00:10:48.205 "traddr": "10.0.0.3", 00:10:48.205 "trsvcid": "4420" 00:10:48.205 }, 00:10:48.205 "peer_address": { 00:10:48.205 "trtype": "TCP", 00:10:48.205 "adrfam": "IPv4", 00:10:48.205 "traddr": "10.0.0.1", 00:10:48.205 "trsvcid": "45506" 00:10:48.205 }, 00:10:48.205 "auth": { 00:10:48.205 "state": "completed", 00:10:48.205 "digest": "sha384", 00:10:48.205 "dhgroup": "ffdhe2048" 00:10:48.205 } 00:10:48.205 } 00:10:48.205 ]' 00:10:48.205 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.463 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.732 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:48.732 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:49.679 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.679 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.246 00:10:50.246 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.246 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.246 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.504 { 00:10:50.504 "cntlid": 61, 00:10:50.504 "qid": 0, 00:10:50.504 "state": "enabled", 00:10:50.504 "thread": "nvmf_tgt_poll_group_000", 00:10:50.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:50.504 "listen_address": { 00:10:50.504 "trtype": "TCP", 00:10:50.504 "adrfam": "IPv4", 00:10:50.504 "traddr": "10.0.0.3", 00:10:50.504 "trsvcid": "4420" 00:10:50.504 }, 00:10:50.504 "peer_address": { 00:10:50.504 "trtype": "TCP", 00:10:50.504 "adrfam": "IPv4", 00:10:50.504 "traddr": "10.0.0.1", 00:10:50.504 "trsvcid": "35648" 00:10:50.504 }, 00:10:50.504 "auth": { 00:10:50.504 "state": "completed", 00:10:50.504 "digest": "sha384", 00:10:50.504 "dhgroup": "ffdhe2048" 00:10:50.504 } 00:10:50.504 } 00:10:50.504 ]' 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:50.504 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.762 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.762 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.762 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.021 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:51.021 20:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:51.587 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.154 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.412 00:10:52.412 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.412 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.412 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.672 { 00:10:52.672 "cntlid": 63, 00:10:52.672 "qid": 0, 00:10:52.672 "state": "enabled", 00:10:52.672 "thread": "nvmf_tgt_poll_group_000", 00:10:52.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:52.672 "listen_address": { 00:10:52.672 "trtype": "TCP", 00:10:52.672 "adrfam": "IPv4", 00:10:52.672 "traddr": "10.0.0.3", 00:10:52.672 "trsvcid": "4420" 00:10:52.672 }, 00:10:52.672 "peer_address": { 00:10:52.672 "trtype": "TCP", 00:10:52.672 "adrfam": "IPv4", 00:10:52.672 "traddr": "10.0.0.1", 00:10:52.672 "trsvcid": "35672" 00:10:52.672 }, 00:10:52.672 "auth": { 00:10:52.672 "state": "completed", 00:10:52.672 "digest": "sha384", 00:10:52.672 "dhgroup": "ffdhe2048" 00:10:52.672 } 00:10:52.672 } 00:10:52.672 ]' 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.672 20:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.672 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:52.672 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.942 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.942 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.942 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.202 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:53.202 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.769 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:53.770 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:54.336 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.595 00:10:54.595 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.595 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.595 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.853 { 00:10:54.853 "cntlid": 65, 00:10:54.853 "qid": 0, 00:10:54.853 "state": "enabled", 00:10:54.853 "thread": "nvmf_tgt_poll_group_000", 00:10:54.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:54.853 "listen_address": { 00:10:54.853 "trtype": "TCP", 00:10:54.853 "adrfam": "IPv4", 00:10:54.853 "traddr": "10.0.0.3", 00:10:54.853 "trsvcid": "4420" 00:10:54.853 }, 00:10:54.853 "peer_address": { 00:10:54.853 "trtype": "TCP", 00:10:54.853 "adrfam": "IPv4", 00:10:54.853 "traddr": "10.0.0.1", 00:10:54.853 "trsvcid": "35716" 00:10:54.853 }, 00:10:54.853 "auth": { 00:10:54.853 "state": "completed", 00:10:54.853 "digest": "sha384", 00:10:54.853 "dhgroup": "ffdhe3072" 00:10:54.853 } 00:10:54.853 } 00:10:54.853 ]' 00:10:54.853 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.112 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.371 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:55.371 20:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.307 20:31:56 
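Each new key index starts the same way: the SPDK host is restricted to the digest and DH group under test, and the target is told which key pair to require for this host NQN. One iteration of that setup, taken in isolation, looks roughly like the following (key1/ckey1 are names of keys registered earlier in the run; socket path and NQNs as printed above):

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
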
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.307 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.874 00:10:56.874 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.874 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.874 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.136 { 00:10:57.136 "cntlid": 67, 00:10:57.136 "qid": 0, 00:10:57.136 "state": "enabled", 00:10:57.136 "thread": "nvmf_tgt_poll_group_000", 00:10:57.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:57.136 "listen_address": { 00:10:57.136 "trtype": "TCP", 00:10:57.136 "adrfam": "IPv4", 00:10:57.136 "traddr": "10.0.0.3", 00:10:57.136 "trsvcid": "4420" 00:10:57.136 }, 00:10:57.136 "peer_address": { 00:10:57.136 "trtype": "TCP", 00:10:57.136 "adrfam": "IPv4", 00:10:57.136 "traddr": "10.0.0.1", 00:10:57.136 "trsvcid": "35742" 00:10:57.136 }, 00:10:57.136 "auth": { 00:10:57.136 "state": "completed", 00:10:57.136 "digest": "sha384", 00:10:57.136 "dhgroup": "ffdhe3072" 00:10:57.136 } 00:10:57.136 } 00:10:57.136 ]' 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.136 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.709 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:57.709 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:58.277 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
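The SPDK-initiator half of the handshake is the controller attach that follows in the trace; on its own it is roughly the sketch below (host RPC socket, address and NQNs as above, key2/ckey2 being the key names used in this iteration):

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # A successful, authenticated attach shows up as a controller with the name passed via -b:
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
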
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.536 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.797 00:10:59.056 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.056 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.056 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.314 { 00:10:59.314 "cntlid": 69, 00:10:59.314 "qid": 0, 00:10:59.314 "state": "enabled", 00:10:59.314 "thread": "nvmf_tgt_poll_group_000", 00:10:59.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:10:59.314 "listen_address": { 00:10:59.314 "trtype": "TCP", 00:10:59.314 "adrfam": "IPv4", 00:10:59.314 "traddr": "10.0.0.3", 00:10:59.314 "trsvcid": "4420" 00:10:59.314 }, 00:10:59.314 "peer_address": { 00:10:59.314 "trtype": "TCP", 00:10:59.314 "adrfam": "IPv4", 00:10:59.314 "traddr": "10.0.0.1", 00:10:59.314 "trsvcid": "46212" 00:10:59.314 }, 00:10:59.314 "auth": { 00:10:59.314 "state": "completed", 00:10:59.314 "digest": "sha384", 00:10:59.314 "dhgroup": "ffdhe3072" 00:10:59.314 } 00:10:59.314 } 00:10:59.314 ]' 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:59.314 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.574 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:10:59.574 20:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.511 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
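Worth noting in the expansion above: the controller key is optional per index. The --dhchap-ctrlr-key flag is only appended when a ckey exists for that slot, which appears to be why the key3 iterations authenticate the host only. Reconstructed from the expanded trace (rpc_cmd, subnqn, hostnqn and the keys/ckeys arrays are the test suite's own helpers and variables):

  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
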
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:00.770 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:01.028 00:11:01.028 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.028 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.028 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.594 { 00:11:01.594 "cntlid": 71, 00:11:01.594 "qid": 0, 00:11:01.594 "state": "enabled", 00:11:01.594 "thread": "nvmf_tgt_poll_group_000", 00:11:01.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:01.594 "listen_address": { 00:11:01.594 "trtype": "TCP", 00:11:01.594 "adrfam": "IPv4", 00:11:01.594 "traddr": "10.0.0.3", 00:11:01.594 "trsvcid": "4420" 00:11:01.594 }, 00:11:01.594 "peer_address": { 00:11:01.594 "trtype": "TCP", 00:11:01.594 "adrfam": "IPv4", 00:11:01.594 "traddr": "10.0.0.1", 00:11:01.594 "trsvcid": "46234" 00:11:01.594 }, 00:11:01.594 "auth": { 00:11:01.594 "state": "completed", 00:11:01.594 "digest": "sha384", 00:11:01.594 "dhgroup": "ffdhe3072" 00:11:01.594 } 00:11:01.594 } 00:11:01.594 ]' 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.594 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.852 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:01.852 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:02.785 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:02.785 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:02.785 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.785 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:02.785 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:02.785 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.044 20:32:03 
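At this point the outer loop has moved on to ffdhe4096. Read as a whole, the sweep this part of the log is executing can be summarised by the nested loop below, reconstructed from the @119-@121 trace lines; hostrpc and connect_authenticate are the suite's helpers, and only the DH groups visible in this portion of the log are listed.

  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done
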
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.044 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.302 00:11:03.302 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.302 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.302 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.561 { 00:11:03.561 "cntlid": 73, 00:11:03.561 "qid": 0, 00:11:03.561 "state": "enabled", 00:11:03.561 "thread": "nvmf_tgt_poll_group_000", 00:11:03.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:03.561 "listen_address": { 00:11:03.561 "trtype": "TCP", 00:11:03.561 "adrfam": "IPv4", 00:11:03.561 "traddr": "10.0.0.3", 00:11:03.561 "trsvcid": "4420" 00:11:03.561 }, 00:11:03.561 "peer_address": { 00:11:03.561 "trtype": "TCP", 00:11:03.561 "adrfam": "IPv4", 00:11:03.561 "traddr": "10.0.0.1", 00:11:03.561 "trsvcid": "46266" 00:11:03.561 }, 00:11:03.561 "auth": { 00:11:03.561 "state": "completed", 00:11:03.561 "digest": "sha384", 00:11:03.561 "dhgroup": "ffdhe4096" 00:11:03.561 } 00:11:03.561 } 00:11:03.561 ]' 00:11:03.561 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.820 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.820 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.820 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:03.820 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.820 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.820 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.820 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.079 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:04.079 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:04.699 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.699 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:04.699 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.699 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.957 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.957 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.957 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:04.957 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.216 20:32:05 
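Between iterations the test returns to a clean slate: the kernel connection is dropped and the host entry is removed from the subsystem, so the next key pair is negotiated from scratch. Stripped down, with the NQNs used above:

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835
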
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.216 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.475 00:11:05.475 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.475 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.475 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.068 { 00:11:06.068 "cntlid": 75, 00:11:06.068 "qid": 0, 00:11:06.068 "state": "enabled", 00:11:06.068 "thread": "nvmf_tgt_poll_group_000", 00:11:06.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:06.068 "listen_address": { 00:11:06.068 "trtype": "TCP", 00:11:06.068 "adrfam": "IPv4", 00:11:06.068 "traddr": "10.0.0.3", 00:11:06.068 "trsvcid": "4420" 00:11:06.068 }, 00:11:06.068 "peer_address": { 00:11:06.068 "trtype": "TCP", 00:11:06.068 "adrfam": "IPv4", 00:11:06.068 "traddr": "10.0.0.1", 00:11:06.068 "trsvcid": "46304" 00:11:06.068 }, 00:11:06.068 "auth": { 00:11:06.068 "state": "completed", 00:11:06.068 "digest": "sha384", 00:11:06.068 "dhgroup": "ffdhe4096" 00:11:06.068 } 00:11:06.068 } 00:11:06.068 ]' 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.068 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.327 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:06.327 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:06.897 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.156 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.722 00:11:07.722 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.722 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.722 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.041 { 00:11:08.041 "cntlid": 77, 00:11:08.041 "qid": 0, 00:11:08.041 "state": "enabled", 00:11:08.041 "thread": "nvmf_tgt_poll_group_000", 00:11:08.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:08.041 "listen_address": { 00:11:08.041 "trtype": "TCP", 00:11:08.041 "adrfam": "IPv4", 00:11:08.041 "traddr": "10.0.0.3", 00:11:08.041 "trsvcid": "4420" 00:11:08.041 }, 00:11:08.041 "peer_address": { 00:11:08.041 "trtype": "TCP", 00:11:08.041 "adrfam": "IPv4", 00:11:08.041 "traddr": "10.0.0.1", 00:11:08.041 "trsvcid": "46332" 00:11:08.041 }, 00:11:08.041 "auth": { 00:11:08.041 "state": "completed", 00:11:08.041 "digest": "sha384", 00:11:08.041 "dhgroup": "ffdhe4096" 00:11:08.041 } 00:11:08.041 } 00:11:08.041 ]' 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.041 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.315 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:08.316 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:09.251 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.252 20:32:09 
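A note on the secrets appearing throughout this log: they use the standard DHHC-1 text representation, DHHC-1:<hh>:<base64 key material>:, where <hh> indicates whether and how the secret is transformed (00 = plain, 01/02/03 = hashed with SHA-256/384/512). A quick way to split one apart (secret truncated, taken from the trace):

  secret='DHHC-1:02:Y2U0YTlkMGJlMGZjYTMx...'
  IFS=: read -r version hash keymat _ <<< "$secret"
  echo "version=$version hash-id=$hash"   # hash-id 00 = plain, 01/02/03 = SHA-256/384/512
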
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:09.252 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:09.818 00:11:09.818 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.818 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.818 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.077 { 00:11:10.077 "cntlid": 79, 00:11:10.077 "qid": 0, 00:11:10.077 "state": "enabled", 00:11:10.077 "thread": "nvmf_tgt_poll_group_000", 00:11:10.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:10.077 "listen_address": { 00:11:10.077 "trtype": "TCP", 00:11:10.077 "adrfam": "IPv4", 00:11:10.077 "traddr": "10.0.0.3", 00:11:10.077 "trsvcid": "4420" 00:11:10.077 }, 00:11:10.077 "peer_address": { 00:11:10.077 "trtype": "TCP", 00:11:10.077 "adrfam": "IPv4", 00:11:10.077 "traddr": "10.0.0.1", 00:11:10.077 "trsvcid": "51772" 00:11:10.077 }, 00:11:10.077 "auth": { 00:11:10.077 "state": "completed", 00:11:10.077 "digest": "sha384", 00:11:10.077 "dhgroup": "ffdhe4096" 00:11:10.077 } 00:11:10.077 } 00:11:10.077 ]' 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.077 20:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.077 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.336 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:10.336 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:11.272 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.531 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.098 00:11:12.098 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.098 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.098 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.356 { 00:11:12.356 "cntlid": 81, 00:11:12.356 "qid": 0, 00:11:12.356 "state": "enabled", 00:11:12.356 "thread": "nvmf_tgt_poll_group_000", 00:11:12.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:12.356 "listen_address": { 00:11:12.356 "trtype": "TCP", 00:11:12.356 "adrfam": "IPv4", 00:11:12.356 "traddr": "10.0.0.3", 00:11:12.356 "trsvcid": "4420" 00:11:12.356 }, 00:11:12.356 "peer_address": { 00:11:12.356 "trtype": "TCP", 00:11:12.356 "adrfam": "IPv4", 00:11:12.356 "traddr": "10.0.0.1", 00:11:12.356 "trsvcid": "51804" 00:11:12.356 }, 00:11:12.356 "auth": { 00:11:12.356 "state": "completed", 00:11:12.356 "digest": "sha384", 00:11:12.356 "dhgroup": "ffdhe6144" 00:11:12.356 } 00:11:12.356 } 00:11:12.356 ]' 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
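Each iteration is then verified by reading the qpair list back from the target and checking the negotiated auth fields, as in the JSON dump above. A sketch of that check for the ffdhe6144 case, under the same default-target-socket assumption:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # Expect the first qpair to have completed DH-HMAC-CHAP with the configured parameters
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
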
00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.356 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.923 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:12.923 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:13.491 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.749 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.316 00:11:14.316 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.316 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.316 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.576 { 00:11:14.576 "cntlid": 83, 00:11:14.576 "qid": 0, 00:11:14.576 "state": "enabled", 00:11:14.576 "thread": "nvmf_tgt_poll_group_000", 00:11:14.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:14.576 "listen_address": { 00:11:14.576 "trtype": "TCP", 00:11:14.576 "adrfam": "IPv4", 00:11:14.576 "traddr": "10.0.0.3", 00:11:14.576 "trsvcid": "4420" 00:11:14.576 }, 00:11:14.576 "peer_address": { 00:11:14.576 "trtype": "TCP", 00:11:14.576 "adrfam": "IPv4", 00:11:14.576 "traddr": "10.0.0.1", 00:11:14.576 "trsvcid": "51822" 00:11:14.576 }, 00:11:14.576 "auth": { 00:11:14.576 "state": "completed", 00:11:14.576 "digest": "sha384", 
00:11:14.576 "dhgroup": "ffdhe6144" 00:11:14.576 } 00:11:14.576 } 00:11:14.576 ]' 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:14.576 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.864 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.864 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.864 20:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.123 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:15.124 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:15.690 20:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.949 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.515 00:11:16.515 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.515 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.515 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.774 { 00:11:16.774 "cntlid": 85, 00:11:16.774 "qid": 0, 00:11:16.774 "state": "enabled", 00:11:16.774 "thread": "nvmf_tgt_poll_group_000", 00:11:16.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:16.774 "listen_address": { 00:11:16.774 "trtype": "TCP", 00:11:16.774 "adrfam": "IPv4", 00:11:16.774 "traddr": "10.0.0.3", 00:11:16.774 "trsvcid": "4420" 00:11:16.774 }, 00:11:16.774 "peer_address": { 00:11:16.774 "trtype": "TCP", 00:11:16.774 "adrfam": "IPv4", 00:11:16.774 "traddr": "10.0.0.1", 00:11:16.774 "trsvcid": "51856" 
00:11:16.774 }, 00:11:16.774 "auth": { 00:11:16.774 "state": "completed", 00:11:16.774 "digest": "sha384", 00:11:16.774 "dhgroup": "ffdhe6144" 00:11:16.774 } 00:11:16.774 } 00:11:16.774 ]' 00:11:16.774 20:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.774 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.774 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.774 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:16.774 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.033 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.033 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.033 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.291 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:17.291 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:17.857 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.116 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.684 00:11:18.684 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.684 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.684 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.942 { 00:11:18.942 "cntlid": 87, 00:11:18.942 "qid": 0, 00:11:18.942 "state": "enabled", 00:11:18.942 "thread": "nvmf_tgt_poll_group_000", 00:11:18.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:18.942 "listen_address": { 00:11:18.942 "trtype": "TCP", 00:11:18.942 "adrfam": "IPv4", 00:11:18.942 "traddr": "10.0.0.3", 00:11:18.942 "trsvcid": "4420" 00:11:18.942 }, 00:11:18.942 "peer_address": { 00:11:18.942 "trtype": "TCP", 00:11:18.942 "adrfam": "IPv4", 00:11:18.942 "traddr": "10.0.0.1", 00:11:18.942 "trsvcid": 
"50012" 00:11:18.942 }, 00:11:18.942 "auth": { 00:11:18.942 "state": "completed", 00:11:18.942 "digest": "sha384", 00:11:18.942 "dhgroup": "ffdhe6144" 00:11:18.942 } 00:11:18.942 } 00:11:18.942 ]' 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.942 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.218 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.218 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.218 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.218 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.218 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.481 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:19.481 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:20.048 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.307 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.243 00:11:21.243 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.243 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.243 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.501 { 00:11:21.501 "cntlid": 89, 00:11:21.501 "qid": 0, 00:11:21.501 "state": "enabled", 00:11:21.501 "thread": "nvmf_tgt_poll_group_000", 00:11:21.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:21.501 "listen_address": { 00:11:21.501 "trtype": "TCP", 00:11:21.501 "adrfam": "IPv4", 00:11:21.501 "traddr": "10.0.0.3", 00:11:21.501 "trsvcid": "4420" 00:11:21.501 }, 00:11:21.501 "peer_address": { 00:11:21.501 
"trtype": "TCP", 00:11:21.501 "adrfam": "IPv4", 00:11:21.501 "traddr": "10.0.0.1", 00:11:21.501 "trsvcid": "50058" 00:11:21.501 }, 00:11:21.501 "auth": { 00:11:21.501 "state": "completed", 00:11:21.501 "digest": "sha384", 00:11:21.501 "dhgroup": "ffdhe8192" 00:11:21.501 } 00:11:21.501 } 00:11:21.501 ]' 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.501 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.069 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:22.069 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:22.636 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:22.895 20:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.895 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.518 00:11:23.777 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.777 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.777 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.036 { 00:11:24.036 "cntlid": 91, 00:11:24.036 "qid": 0, 00:11:24.036 "state": "enabled", 00:11:24.036 "thread": "nvmf_tgt_poll_group_000", 00:11:24.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 
00:11:24.036 "listen_address": { 00:11:24.036 "trtype": "TCP", 00:11:24.036 "adrfam": "IPv4", 00:11:24.036 "traddr": "10.0.0.3", 00:11:24.036 "trsvcid": "4420" 00:11:24.036 }, 00:11:24.036 "peer_address": { 00:11:24.036 "trtype": "TCP", 00:11:24.036 "adrfam": "IPv4", 00:11:24.036 "traddr": "10.0.0.1", 00:11:24.036 "trsvcid": "50080" 00:11:24.036 }, 00:11:24.036 "auth": { 00:11:24.036 "state": "completed", 00:11:24.036 "digest": "sha384", 00:11:24.036 "dhgroup": "ffdhe8192" 00:11:24.036 } 00:11:24.036 } 00:11:24.036 ]' 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.036 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.297 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:24.297 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.232 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.490 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.055 00:11:26.055 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.055 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.055 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.619 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.619 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.620 { 00:11:26.620 "cntlid": 93, 00:11:26.620 "qid": 0, 00:11:26.620 "state": "enabled", 00:11:26.620 "thread": 
"nvmf_tgt_poll_group_000", 00:11:26.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:26.620 "listen_address": { 00:11:26.620 "trtype": "TCP", 00:11:26.620 "adrfam": "IPv4", 00:11:26.620 "traddr": "10.0.0.3", 00:11:26.620 "trsvcid": "4420" 00:11:26.620 }, 00:11:26.620 "peer_address": { 00:11:26.620 "trtype": "TCP", 00:11:26.620 "adrfam": "IPv4", 00:11:26.620 "traddr": "10.0.0.1", 00:11:26.620 "trsvcid": "50114" 00:11:26.620 }, 00:11:26.620 "auth": { 00:11:26.620 "state": "completed", 00:11:26.620 "digest": "sha384", 00:11:26.620 "dhgroup": "ffdhe8192" 00:11:26.620 } 00:11:26.620 } 00:11:26.620 ]' 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.620 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.877 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:26.877 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.808 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:27.808 20:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.066 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.632 00:11:28.632 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.632 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.632 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.890 { 00:11:28.890 "cntlid": 95, 00:11:28.890 "qid": 0, 00:11:28.890 "state": "enabled", 00:11:28.890 
"thread": "nvmf_tgt_poll_group_000", 00:11:28.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:28.890 "listen_address": { 00:11:28.890 "trtype": "TCP", 00:11:28.890 "adrfam": "IPv4", 00:11:28.890 "traddr": "10.0.0.3", 00:11:28.890 "trsvcid": "4420" 00:11:28.890 }, 00:11:28.890 "peer_address": { 00:11:28.890 "trtype": "TCP", 00:11:28.890 "adrfam": "IPv4", 00:11:28.890 "traddr": "10.0.0.1", 00:11:28.890 "trsvcid": "45170" 00:11:28.890 }, 00:11:28.890 "auth": { 00:11:28.890 "state": "completed", 00:11:28.890 "digest": "sha384", 00:11:28.890 "dhgroup": "ffdhe8192" 00:11:28.890 } 00:11:28.890 } 00:11:28.890 ]' 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.890 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:28.891 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.148 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.148 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.148 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.406 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:29.406 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:29.972 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.229 20:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:30.229 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.488 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.747 00:11:30.747 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.747 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.747 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.006 { 00:11:31.006 "cntlid": 97, 00:11:31.006 "qid": 0, 00:11:31.006 "state": "enabled", 00:11:31.006 "thread": "nvmf_tgt_poll_group_000", 00:11:31.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:31.006 "listen_address": { 00:11:31.006 "trtype": "TCP", 00:11:31.006 "adrfam": "IPv4", 00:11:31.006 "traddr": "10.0.0.3", 00:11:31.006 "trsvcid": "4420" 00:11:31.006 }, 00:11:31.006 "peer_address": { 00:11:31.006 "trtype": "TCP", 00:11:31.006 "adrfam": "IPv4", 00:11:31.006 "traddr": "10.0.0.1", 00:11:31.006 "trsvcid": "45200" 00:11:31.006 }, 00:11:31.006 "auth": { 00:11:31.006 "state": "completed", 00:11:31.006 "digest": "sha512", 00:11:31.006 "dhgroup": "null" 00:11:31.006 } 00:11:31.006 } 00:11:31.006 ]' 00:11:31.006 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.264 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.522 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:31.522 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:32.088 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.655 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.914 00:11:32.914 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.914 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.914 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.173 20:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.173 { 00:11:33.173 "cntlid": 99, 00:11:33.173 "qid": 0, 00:11:33.173 "state": "enabled", 00:11:33.173 "thread": "nvmf_tgt_poll_group_000", 00:11:33.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:33.173 "listen_address": { 00:11:33.173 "trtype": "TCP", 00:11:33.173 "adrfam": "IPv4", 00:11:33.173 "traddr": "10.0.0.3", 00:11:33.173 "trsvcid": "4420" 00:11:33.173 }, 00:11:33.173 "peer_address": { 00:11:33.173 "trtype": "TCP", 00:11:33.173 "adrfam": "IPv4", 00:11:33.173 "traddr": "10.0.0.1", 00:11:33.173 "trsvcid": "45232" 00:11:33.173 }, 00:11:33.173 "auth": { 00:11:33.173 "state": "completed", 00:11:33.173 "digest": "sha512", 00:11:33.173 "dhgroup": "null" 00:11:33.173 } 00:11:33.173 } 00:11:33.173 ]' 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.173 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.431 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:33.431 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.431 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.431 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.431 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.690 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:33.690 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 20:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:34.258 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.826 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.084 00:11:35.085 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.085 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.085 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.343 { 00:11:35.343 "cntlid": 101, 00:11:35.343 "qid": 0, 00:11:35.343 "state": "enabled", 00:11:35.343 "thread": "nvmf_tgt_poll_group_000", 00:11:35.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:35.343 "listen_address": { 00:11:35.343 "trtype": "TCP", 00:11:35.343 "adrfam": "IPv4", 00:11:35.343 "traddr": "10.0.0.3", 00:11:35.343 "trsvcid": "4420" 00:11:35.343 }, 00:11:35.343 "peer_address": { 00:11:35.343 "trtype": "TCP", 00:11:35.343 "adrfam": "IPv4", 00:11:35.343 "traddr": "10.0.0.1", 00:11:35.343 "trsvcid": "45266" 00:11:35.343 }, 00:11:35.343 "auth": { 00:11:35.343 "state": "completed", 00:11:35.343 "digest": "sha512", 00:11:35.343 "dhgroup": "null" 00:11:35.343 } 00:11:35.343 } 00:11:35.343 ]' 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:35.343 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.602 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.602 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.602 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.860 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:35.860 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:36.428 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.994 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.253 00:11:37.253 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.253 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.253 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.513 { 00:11:37.513 "cntlid": 103, 00:11:37.513 "qid": 0, 00:11:37.513 "state": "enabled", 00:11:37.513 "thread": "nvmf_tgt_poll_group_000", 00:11:37.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:37.513 "listen_address": { 00:11:37.513 "trtype": "TCP", 00:11:37.513 "adrfam": "IPv4", 00:11:37.513 "traddr": "10.0.0.3", 00:11:37.513 "trsvcid": "4420" 00:11:37.513 }, 00:11:37.513 "peer_address": { 00:11:37.513 "trtype": "TCP", 00:11:37.513 "adrfam": "IPv4", 00:11:37.513 "traddr": "10.0.0.1", 00:11:37.513 "trsvcid": "45296" 00:11:37.513 }, 00:11:37.513 "auth": { 00:11:37.513 "state": "completed", 00:11:37.513 "digest": "sha512", 00:11:37.513 "dhgroup": "null" 00:11:37.513 } 00:11:37.513 } 00:11:37.513 ]' 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.513 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.771 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:37.771 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.771 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.771 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.771 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.030 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:38.030 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:38.597 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.855 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.421 00:11:39.421 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.421 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.421 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.722 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.722 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.722 
20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.722 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.722 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.722 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.722 { 00:11:39.722 "cntlid": 105, 00:11:39.722 "qid": 0, 00:11:39.722 "state": "enabled", 00:11:39.722 "thread": "nvmf_tgt_poll_group_000", 00:11:39.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:39.722 "listen_address": { 00:11:39.722 "trtype": "TCP", 00:11:39.722 "adrfam": "IPv4", 00:11:39.722 "traddr": "10.0.0.3", 00:11:39.723 "trsvcid": "4420" 00:11:39.723 }, 00:11:39.723 "peer_address": { 00:11:39.723 "trtype": "TCP", 00:11:39.723 "adrfam": "IPv4", 00:11:39.723 "traddr": "10.0.0.1", 00:11:39.723 "trsvcid": "49594" 00:11:39.723 }, 00:11:39.723 "auth": { 00:11:39.723 "state": "completed", 00:11:39.723 "digest": "sha512", 00:11:39.723 "dhgroup": "ffdhe2048" 00:11:39.723 } 00:11:39.723 } 00:11:39.723 ]' 00:11:39.723 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.723 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.723 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.723 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:39.723 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.723 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.723 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.723 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.981 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:39.981 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:40.917 20:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:40.917 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.176 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.437 00:11:41.437 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.437 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.437 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.696 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:41.696 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.697 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.697 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.697 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.697 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.697 { 00:11:41.697 "cntlid": 107, 00:11:41.697 "qid": 0, 00:11:41.697 "state": "enabled", 00:11:41.697 "thread": "nvmf_tgt_poll_group_000", 00:11:41.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:41.697 "listen_address": { 00:11:41.697 "trtype": "TCP", 00:11:41.697 "adrfam": "IPv4", 00:11:41.697 "traddr": "10.0.0.3", 00:11:41.697 "trsvcid": "4420" 00:11:41.697 }, 00:11:41.697 "peer_address": { 00:11:41.697 "trtype": "TCP", 00:11:41.697 "adrfam": "IPv4", 00:11:41.697 "traddr": "10.0.0.1", 00:11:41.697 "trsvcid": "49610" 00:11:41.697 }, 00:11:41.697 "auth": { 00:11:41.697 "state": "completed", 00:11:41.697 "digest": "sha512", 00:11:41.697 "dhgroup": "ffdhe2048" 00:11:41.697 } 00:11:41.697 } 00:11:41.697 ]' 00:11:41.697 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.955 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.214 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:42.214 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.148 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.714 00:11:43.714 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.714 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.714 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.973 { 00:11:43.973 "cntlid": 109, 00:11:43.973 "qid": 0, 00:11:43.973 "state": "enabled", 00:11:43.973 "thread": "nvmf_tgt_poll_group_000", 00:11:43.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:43.973 "listen_address": { 00:11:43.973 "trtype": "TCP", 00:11:43.973 "adrfam": "IPv4", 00:11:43.973 "traddr": "10.0.0.3", 00:11:43.973 "trsvcid": "4420" 00:11:43.973 }, 00:11:43.973 "peer_address": { 00:11:43.973 "trtype": "TCP", 00:11:43.973 "adrfam": "IPv4", 00:11:43.973 "traddr": "10.0.0.1", 00:11:43.973 "trsvcid": "49642" 00:11:43.973 }, 00:11:43.973 "auth": { 00:11:43.973 "state": "completed", 00:11:43.973 "digest": "sha512", 00:11:43.973 "dhgroup": "ffdhe2048" 00:11:43.973 } 00:11:43.973 } 00:11:43.973 ]' 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.973 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.538 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:44.538 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.106 20:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.106 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.365 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.624 00:11:45.624 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.624 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.624 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.883 { 00:11:45.883 "cntlid": 111, 00:11:45.883 "qid": 0, 00:11:45.883 "state": "enabled", 00:11:45.883 "thread": "nvmf_tgt_poll_group_000", 00:11:45.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:45.883 "listen_address": { 00:11:45.883 "trtype": "TCP", 00:11:45.883 "adrfam": "IPv4", 00:11:45.883 "traddr": "10.0.0.3", 00:11:45.883 "trsvcid": "4420" 00:11:45.883 }, 00:11:45.883 "peer_address": { 00:11:45.883 "trtype": "TCP", 00:11:45.883 "adrfam": "IPv4", 00:11:45.883 "traddr": "10.0.0.1", 00:11:45.883 "trsvcid": "49662" 00:11:45.883 }, 00:11:45.883 "auth": { 00:11:45.883 "state": "completed", 00:11:45.883 "digest": "sha512", 00:11:45.883 "dhgroup": "ffdhe2048" 00:11:45.883 } 00:11:45.883 } 00:11:45.883 ]' 00:11:45.883 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.141 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.400 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:46.400 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.336 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.337 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.005 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.005 20:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.005 { 00:11:48.005 "cntlid": 113, 00:11:48.005 "qid": 0, 00:11:48.005 "state": "enabled", 00:11:48.005 "thread": "nvmf_tgt_poll_group_000", 00:11:48.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:48.005 "listen_address": { 00:11:48.005 "trtype": "TCP", 00:11:48.005 "adrfam": "IPv4", 00:11:48.005 "traddr": "10.0.0.3", 00:11:48.005 "trsvcid": "4420" 00:11:48.005 }, 00:11:48.005 "peer_address": { 00:11:48.005 "trtype": "TCP", 00:11:48.005 "adrfam": "IPv4", 00:11:48.005 "traddr": "10.0.0.1", 00:11:48.005 "trsvcid": "49698" 00:11:48.005 }, 00:11:48.005 "auth": { 00:11:48.005 "state": "completed", 00:11:48.005 "digest": "sha512", 00:11:48.005 "dhgroup": "ffdhe3072" 00:11:48.005 } 00:11:48.005 } 00:11:48.005 ]' 00:11:48.005 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.276 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.535 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:48.535 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 
00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.470 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.471 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.471 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.038 00:11:50.038 20:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.038 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.038 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.295 { 00:11:50.295 "cntlid": 115, 00:11:50.295 "qid": 0, 00:11:50.295 "state": "enabled", 00:11:50.295 "thread": "nvmf_tgt_poll_group_000", 00:11:50.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:50.295 "listen_address": { 00:11:50.295 "trtype": "TCP", 00:11:50.295 "adrfam": "IPv4", 00:11:50.295 "traddr": "10.0.0.3", 00:11:50.295 "trsvcid": "4420" 00:11:50.295 }, 00:11:50.295 "peer_address": { 00:11:50.295 "trtype": "TCP", 00:11:50.295 "adrfam": "IPv4", 00:11:50.295 "traddr": "10.0.0.1", 00:11:50.295 "trsvcid": "37330" 00:11:50.295 }, 00:11:50.295 "auth": { 00:11:50.295 "state": "completed", 00:11:50.295 "digest": "sha512", 00:11:50.295 "dhgroup": "ffdhe3072" 00:11:50.295 } 00:11:50.295 } 00:11:50.295 ]' 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.295 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.553 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:50.553 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret 
DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.488 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.748 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.007 00:11:52.007 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.007 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.007 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.265 { 00:11:52.265 "cntlid": 117, 00:11:52.265 "qid": 0, 00:11:52.265 "state": "enabled", 00:11:52.265 "thread": "nvmf_tgt_poll_group_000", 00:11:52.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:52.265 "listen_address": { 00:11:52.265 "trtype": "TCP", 00:11:52.265 "adrfam": "IPv4", 00:11:52.265 "traddr": "10.0.0.3", 00:11:52.265 "trsvcid": "4420" 00:11:52.265 }, 00:11:52.265 "peer_address": { 00:11:52.265 "trtype": "TCP", 00:11:52.265 "adrfam": "IPv4", 00:11:52.265 "traddr": "10.0.0.1", 00:11:52.265 "trsvcid": "37372" 00:11:52.265 }, 00:11:52.265 "auth": { 00:11:52.265 "state": "completed", 00:11:52.265 "digest": "sha512", 00:11:52.265 "dhgroup": "ffdhe3072" 00:11:52.265 } 00:11:52.265 } 00:11:52.265 ]' 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.265 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.524 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:52.524 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.524 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.524 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.524 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.781 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:52.781 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.352 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.610 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.868 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.868 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:53.868 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.868 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.126 00:11:54.126 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.126 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.126 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.384 { 00:11:54.384 "cntlid": 119, 00:11:54.384 "qid": 0, 00:11:54.384 "state": "enabled", 00:11:54.384 "thread": "nvmf_tgt_poll_group_000", 00:11:54.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:54.384 "listen_address": { 00:11:54.384 "trtype": "TCP", 00:11:54.384 "adrfam": "IPv4", 00:11:54.384 "traddr": "10.0.0.3", 00:11:54.384 "trsvcid": "4420" 00:11:54.384 }, 00:11:54.384 "peer_address": { 00:11:54.384 "trtype": "TCP", 00:11:54.384 "adrfam": "IPv4", 00:11:54.384 "traddr": "10.0.0.1", 00:11:54.384 "trsvcid": "37388" 00:11:54.384 }, 00:11:54.384 "auth": { 00:11:54.384 "state": "completed", 00:11:54.384 "digest": "sha512", 00:11:54.384 "dhgroup": "ffdhe3072" 00:11:54.384 } 00:11:54.384 } 00:11:54.384 ]' 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.384 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.641 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.641 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.641 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.641 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.642 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.642 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.899 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:54.899 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:55.466 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.726 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.984 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.984 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.984 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.984 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.243 00:11:56.243 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.243 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.243 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.502 { 00:11:56.502 "cntlid": 121, 00:11:56.502 "qid": 0, 00:11:56.502 "state": "enabled", 00:11:56.502 "thread": "nvmf_tgt_poll_group_000", 00:11:56.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:56.502 "listen_address": { 00:11:56.502 "trtype": "TCP", 00:11:56.502 "adrfam": "IPv4", 00:11:56.502 "traddr": "10.0.0.3", 00:11:56.502 "trsvcid": "4420" 00:11:56.502 }, 00:11:56.502 "peer_address": { 00:11:56.502 "trtype": "TCP", 00:11:56.502 "adrfam": "IPv4", 00:11:56.502 "traddr": "10.0.0.1", 00:11:56.502 "trsvcid": "37414" 00:11:56.502 }, 00:11:56.502 "auth": { 00:11:56.502 "state": "completed", 00:11:56.502 "digest": "sha512", 00:11:56.502 "dhgroup": "ffdhe4096" 00:11:56.502 } 00:11:56.502 } 00:11:56.502 ]' 00:11:56.502 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.761 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.020 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret 
DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:57.020 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:11:57.586 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:57.845 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.103 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.104 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.362 00:11:58.362 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.362 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.362 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.621 { 00:11:58.621 "cntlid": 123, 00:11:58.621 "qid": 0, 00:11:58.621 "state": "enabled", 00:11:58.621 "thread": "nvmf_tgt_poll_group_000", 00:11:58.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:11:58.621 "listen_address": { 00:11:58.621 "trtype": "TCP", 00:11:58.621 "adrfam": "IPv4", 00:11:58.621 "traddr": "10.0.0.3", 00:11:58.621 "trsvcid": "4420" 00:11:58.621 }, 00:11:58.621 "peer_address": { 00:11:58.621 "trtype": "TCP", 00:11:58.621 "adrfam": "IPv4", 00:11:58.621 "traddr": "10.0.0.1", 00:11:58.621 "trsvcid": "40488" 00:11:58.621 }, 00:11:58.621 "auth": { 00:11:58.621 "state": "completed", 00:11:58.621 "digest": "sha512", 00:11:58.621 "dhgroup": "ffdhe4096" 00:11:58.621 } 00:11:58.621 } 00:11:58.621 ]' 00:11:58.621 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.880 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.880 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.880 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:58.880 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.880 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.880 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.880 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.139 20:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:59.139 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:11:59.705 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.705 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:11:59.705 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.705 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.705 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.705 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.705 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:59.706 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.964 20:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.964 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.532 00:12:00.532 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.532 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.532 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.791 { 00:12:00.791 "cntlid": 125, 00:12:00.791 "qid": 0, 00:12:00.791 "state": "enabled", 00:12:00.791 "thread": "nvmf_tgt_poll_group_000", 00:12:00.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:00.791 "listen_address": { 00:12:00.791 "trtype": "TCP", 00:12:00.791 "adrfam": "IPv4", 00:12:00.791 "traddr": "10.0.0.3", 00:12:00.791 "trsvcid": "4420" 00:12:00.791 }, 00:12:00.791 "peer_address": { 00:12:00.791 "trtype": "TCP", 00:12:00.791 "adrfam": "IPv4", 00:12:00.791 "traddr": "10.0.0.1", 00:12:00.791 "trsvcid": "40510" 00:12:00.791 }, 00:12:00.791 "auth": { 00:12:00.791 "state": "completed", 00:12:00.791 "digest": "sha512", 00:12:00.791 "dhgroup": "ffdhe4096" 00:12:00.791 } 00:12:00.791 } 00:12:00.791 ]' 00:12:00.791 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.791 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.358 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:12:01.358 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.925 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.183 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.749 00:12:02.749 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.749 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.749 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.008 { 00:12:03.008 "cntlid": 127, 00:12:03.008 "qid": 0, 00:12:03.008 "state": "enabled", 00:12:03.008 "thread": "nvmf_tgt_poll_group_000", 00:12:03.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:03.008 "listen_address": { 00:12:03.008 "trtype": "TCP", 00:12:03.008 "adrfam": "IPv4", 00:12:03.008 "traddr": "10.0.0.3", 00:12:03.008 "trsvcid": "4420" 00:12:03.008 }, 00:12:03.008 "peer_address": { 00:12:03.008 "trtype": "TCP", 00:12:03.008 "adrfam": "IPv4", 00:12:03.008 "traddr": "10.0.0.1", 00:12:03.008 "trsvcid": "40538" 00:12:03.008 }, 00:12:03.008 "auth": { 00:12:03.008 "state": "completed", 00:12:03.008 "digest": "sha512", 00:12:03.008 "dhgroup": "ffdhe4096" 00:12:03.008 } 00:12:03.008 } 00:12:03.008 ]' 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.008 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.266 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.266 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.266 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.266 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.266 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.524 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:03.524 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:04.195 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.195 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:04.196 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.454 20:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.454 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.022 00:12:05.022 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.022 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.022 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.281 { 00:12:05.281 "cntlid": 129, 00:12:05.281 "qid": 0, 00:12:05.281 "state": "enabled", 00:12:05.281 "thread": "nvmf_tgt_poll_group_000", 00:12:05.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:05.281 "listen_address": { 00:12:05.281 "trtype": "TCP", 00:12:05.281 "adrfam": "IPv4", 00:12:05.281 "traddr": "10.0.0.3", 00:12:05.281 "trsvcid": "4420" 00:12:05.281 }, 00:12:05.281 "peer_address": { 00:12:05.281 "trtype": "TCP", 00:12:05.281 "adrfam": "IPv4", 00:12:05.281 "traddr": "10.0.0.1", 00:12:05.281 "trsvcid": "40566" 00:12:05.281 }, 00:12:05.281 "auth": { 00:12:05.281 "state": "completed", 00:12:05.281 "digest": "sha512", 00:12:05.281 "dhgroup": "ffdhe6144" 00:12:05.281 } 00:12:05.281 } 00:12:05.281 ]' 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:05.281 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.539 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.539 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.539 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.799 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:12:05.799 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:12:06.364 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.364 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:06.364 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.365 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.623 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.623 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.623 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:06.623 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:06.880 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:06.880 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.880 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.881 20:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.881 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.502 00:12:07.502 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.502 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.502 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.770 { 00:12:07.770 "cntlid": 131, 00:12:07.770 "qid": 0, 00:12:07.770 "state": "enabled", 00:12:07.770 "thread": "nvmf_tgt_poll_group_000", 00:12:07.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:07.770 "listen_address": { 00:12:07.770 "trtype": "TCP", 00:12:07.770 "adrfam": "IPv4", 00:12:07.770 "traddr": "10.0.0.3", 00:12:07.770 "trsvcid": "4420" 00:12:07.770 }, 00:12:07.770 "peer_address": { 00:12:07.770 "trtype": "TCP", 00:12:07.770 "adrfam": "IPv4", 00:12:07.770 "traddr": "10.0.0.1", 00:12:07.770 "trsvcid": "40574" 00:12:07.770 }, 00:12:07.770 "auth": { 00:12:07.770 "state": "completed", 00:12:07.770 "digest": "sha512", 00:12:07.770 "dhgroup": "ffdhe6144" 00:12:07.770 } 00:12:07.770 } 00:12:07.770 ]' 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:07.770 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:07.770 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.770 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.770 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.027 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:12:08.027 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:08.960 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:08.960 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:08.960 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.960 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.960 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:08.960 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.960 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.961 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.961 20:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.961 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.961 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.961 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.961 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.961 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.525 00:12:09.525 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.525 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.525 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.783 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.783 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.783 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.783 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.783 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.783 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.783 { 00:12:09.783 "cntlid": 133, 00:12:09.783 "qid": 0, 00:12:09.783 "state": "enabled", 00:12:09.784 "thread": "nvmf_tgt_poll_group_000", 00:12:09.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:09.784 "listen_address": { 00:12:09.784 "trtype": "TCP", 00:12:09.784 "adrfam": "IPv4", 00:12:09.784 "traddr": "10.0.0.3", 00:12:09.784 "trsvcid": "4420" 00:12:09.784 }, 00:12:09.784 "peer_address": { 00:12:09.784 "trtype": "TCP", 00:12:09.784 "adrfam": "IPv4", 00:12:09.784 "traddr": "10.0.0.1", 00:12:09.784 "trsvcid": "59564" 00:12:09.784 }, 00:12:09.784 "auth": { 00:12:09.784 "state": "completed", 00:12:09.784 "digest": "sha512", 00:12:09.784 "dhgroup": "ffdhe6144" 00:12:09.784 } 00:12:09.784 } 00:12:09.784 ]' 00:12:09.784 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.784 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.784 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.784 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:09.784 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.042 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.042 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.042 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.300 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:12:10.300 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:12:10.867 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.124 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.433 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:11.434 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.434 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.692 00:12:11.951 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.951 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.951 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.209 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.209 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.209 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.209 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.210 { 00:12:12.210 "cntlid": 135, 00:12:12.210 "qid": 0, 00:12:12.210 "state": "enabled", 00:12:12.210 "thread": "nvmf_tgt_poll_group_000", 00:12:12.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:12.210 "listen_address": { 00:12:12.210 "trtype": "TCP", 00:12:12.210 "adrfam": "IPv4", 00:12:12.210 "traddr": "10.0.0.3", 00:12:12.210 "trsvcid": "4420" 00:12:12.210 }, 00:12:12.210 "peer_address": { 00:12:12.210 "trtype": "TCP", 00:12:12.210 "adrfam": "IPv4", 00:12:12.210 "traddr": "10.0.0.1", 00:12:12.210 "trsvcid": "59600" 00:12:12.210 }, 00:12:12.210 "auth": { 00:12:12.210 "state": "completed", 00:12:12.210 "digest": "sha512", 00:12:12.210 "dhgroup": "ffdhe6144" 00:12:12.210 } 00:12:12.210 } 00:12:12.210 ]' 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.210 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.470 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.470 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.470 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.729 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:12.729 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:13.297 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.557 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.124 00:12:14.382 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.382 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.382 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.640 { 00:12:14.640 "cntlid": 137, 00:12:14.640 "qid": 0, 00:12:14.640 "state": "enabled", 00:12:14.640 "thread": "nvmf_tgt_poll_group_000", 00:12:14.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:14.640 "listen_address": { 00:12:14.640 "trtype": "TCP", 00:12:14.640 "adrfam": "IPv4", 00:12:14.640 "traddr": "10.0.0.3", 00:12:14.640 "trsvcid": "4420" 00:12:14.640 }, 00:12:14.640 "peer_address": { 00:12:14.640 "trtype": "TCP", 00:12:14.640 "adrfam": "IPv4", 00:12:14.640 "traddr": "10.0.0.1", 00:12:14.640 "trsvcid": "59626" 00:12:14.640 }, 00:12:14.640 "auth": { 00:12:14.640 "state": "completed", 00:12:14.640 "digest": "sha512", 00:12:14.640 "dhgroup": "ffdhe8192" 00:12:14.640 } 00:12:14.640 } 00:12:14.640 ]' 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.640 20:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.640 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.898 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:12:14.898 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:15.833 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:16.092 20:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.092 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.658 00:12:16.658 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.658 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.658 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.917 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.917 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.917 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.917 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.175 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.175 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.175 { 00:12:17.175 "cntlid": 139, 00:12:17.175 "qid": 0, 00:12:17.175 "state": "enabled", 00:12:17.175 "thread": "nvmf_tgt_poll_group_000", 00:12:17.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:17.175 "listen_address": { 00:12:17.175 "trtype": "TCP", 00:12:17.175 "adrfam": "IPv4", 00:12:17.175 "traddr": "10.0.0.3", 00:12:17.175 "trsvcid": "4420" 00:12:17.175 }, 00:12:17.175 "peer_address": { 00:12:17.175 "trtype": "TCP", 00:12:17.175 "adrfam": "IPv4", 00:12:17.175 "traddr": "10.0.0.1", 00:12:17.175 "trsvcid": "59644" 00:12:17.175 }, 00:12:17.175 "auth": { 00:12:17.175 "state": "completed", 00:12:17.175 "digest": "sha512", 00:12:17.175 "dhgroup": "ffdhe8192" 00:12:17.175 } 00:12:17.175 } 00:12:17.175 ]' 00:12:17.175 20:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.175 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.175 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.175 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.175 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.176 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.176 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.176 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.435 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:12:17.435 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: --dhchap-ctrl-secret DHHC-1:02:ZTdjNmMzZjkyYWU2ZjAzZjJiM2YxZTJkY2Y5YjQ0NmVhMGQ2NTkwYzY4MzU3Y2IznXKW0Q==: 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:18.370 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.629 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.195 00:12:19.195 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.195 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.195 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.486 { 00:12:19.486 "cntlid": 141, 00:12:19.486 "qid": 0, 00:12:19.486 "state": "enabled", 00:12:19.486 "thread": "nvmf_tgt_poll_group_000", 00:12:19.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:19.486 "listen_address": { 00:12:19.486 "trtype": "TCP", 00:12:19.486 "adrfam": "IPv4", 00:12:19.486 "traddr": "10.0.0.3", 00:12:19.486 "trsvcid": "4420" 00:12:19.486 }, 00:12:19.486 "peer_address": { 00:12:19.486 "trtype": "TCP", 00:12:19.486 "adrfam": "IPv4", 00:12:19.486 "traddr": "10.0.0.1", 00:12:19.486 "trsvcid": "57314" 00:12:19.486 }, 00:12:19.486 "auth": { 00:12:19.486 "state": "completed", 00:12:19.486 "digest": 
"sha512", 00:12:19.486 "dhgroup": "ffdhe8192" 00:12:19.486 } 00:12:19.486 } 00:12:19.486 ]' 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.486 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.745 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:19.745 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.745 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.745 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.745 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.004 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:12:20.004 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:01:Y2NhYTk1MTI5NzM0ZDczMTdlMjdmZTRjZGJhMzFhMjLXxxJa: 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:20.571 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.138 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.705 00:12:21.705 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.705 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.705 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.963 { 00:12:21.963 "cntlid": 143, 00:12:21.963 "qid": 0, 00:12:21.963 "state": "enabled", 00:12:21.963 "thread": "nvmf_tgt_poll_group_000", 00:12:21.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:21.963 "listen_address": { 00:12:21.963 "trtype": "TCP", 00:12:21.963 "adrfam": "IPv4", 00:12:21.963 "traddr": "10.0.0.3", 00:12:21.963 "trsvcid": "4420" 00:12:21.963 }, 00:12:21.963 "peer_address": { 00:12:21.963 "trtype": "TCP", 00:12:21.963 "adrfam": "IPv4", 00:12:21.963 "traddr": "10.0.0.1", 00:12:21.963 "trsvcid": "57338" 00:12:21.963 }, 00:12:21.963 "auth": { 00:12:21.963 "state": "completed", 00:12:21.963 
"digest": "sha512", 00:12:21.963 "dhgroup": "ffdhe8192" 00:12:21.963 } 00:12:21.963 } 00:12:21.963 ]' 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:21.963 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.221 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.221 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.221 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.479 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:22.479 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:23.046 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.305 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.630 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.630 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.630 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.630 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.197 00:12:24.197 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.197 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.197 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.456 { 00:12:24.456 "cntlid": 145, 00:12:24.456 "qid": 0, 00:12:24.456 "state": "enabled", 00:12:24.456 "thread": "nvmf_tgt_poll_group_000", 00:12:24.456 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:24.456 "listen_address": { 00:12:24.456 "trtype": "TCP", 00:12:24.456 "adrfam": "IPv4", 00:12:24.456 "traddr": "10.0.0.3", 00:12:24.456 "trsvcid": "4420" 00:12:24.456 }, 00:12:24.456 "peer_address": { 00:12:24.456 "trtype": "TCP", 00:12:24.456 "adrfam": "IPv4", 00:12:24.456 "traddr": "10.0.0.1", 00:12:24.456 "trsvcid": "57364" 00:12:24.456 }, 00:12:24.456 "auth": { 00:12:24.456 "state": "completed", 00:12:24.456 "digest": "sha512", 00:12:24.456 "dhgroup": "ffdhe8192" 00:12:24.456 } 00:12:24.456 } 00:12:24.456 ]' 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.456 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.714 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:12:24.714 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:00:MmIxYWZiMDc2MTdkY2M3MGE5NjAyYzYyMmJiNDIyZjdiMmQ2ZWU5YTFjNmI2ZWFixR7HGA==: --dhchap-ctrl-secret DHHC-1:03:MTU5NmZjNDE3MDBkZTFhYmVkNGVkNTE2Mjg4M2UyMjAyOTdmZDAwNTk0OGUyZmVhZDNkM2FlN2JlMmExMmFkOYWInV8=: 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 00:12:25.650 20:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:25.650 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:26.219 request: 00:12:26.219 { 00:12:26.219 "name": "nvme0", 00:12:26.219 "trtype": "tcp", 00:12:26.219 "traddr": "10.0.0.3", 00:12:26.219 "adrfam": "ipv4", 00:12:26.219 "trsvcid": "4420", 00:12:26.219 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:26.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:26.219 "prchk_reftag": false, 00:12:26.219 "prchk_guard": false, 00:12:26.219 "hdgst": false, 00:12:26.219 "ddgst": false, 00:12:26.219 "dhchap_key": "key2", 00:12:26.219 "allow_unrecognized_csi": false, 00:12:26.219 "method": "bdev_nvme_attach_controller", 00:12:26.219 "req_id": 1 00:12:26.219 } 00:12:26.219 Got JSON-RPC error response 00:12:26.219 response: 00:12:26.219 { 00:12:26.219 "code": -5, 00:12:26.219 "message": "Input/output error" 00:12:26.219 } 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:26.219 
20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.219 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.786 request: 00:12:26.786 { 00:12:26.786 "name": "nvme0", 00:12:26.786 "trtype": "tcp", 00:12:26.786 "traddr": "10.0.0.3", 00:12:26.786 "adrfam": "ipv4", 00:12:26.786 "trsvcid": "4420", 00:12:26.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:26.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:26.786 "prchk_reftag": false, 00:12:26.786 "prchk_guard": false, 00:12:26.786 "hdgst": false, 00:12:26.786 "ddgst": false, 00:12:26.786 "dhchap_key": "key1", 00:12:26.786 "dhchap_ctrlr_key": "ckey2", 00:12:26.786 "allow_unrecognized_csi": false, 00:12:26.786 "method": "bdev_nvme_attach_controller", 00:12:26.786 "req_id": 1 00:12:26.786 } 00:12:26.786 Got JSON-RPC error response 00:12:26.786 response: 00:12:26.786 { 
00:12:26.786 "code": -5, 00:12:26.786 "message": "Input/output error" 00:12:26.786 } 00:12:26.786 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:26.786 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.786 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.786 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.786 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.787 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.354 
request: 00:12:27.354 { 00:12:27.354 "name": "nvme0", 00:12:27.354 "trtype": "tcp", 00:12:27.354 "traddr": "10.0.0.3", 00:12:27.354 "adrfam": "ipv4", 00:12:27.354 "trsvcid": "4420", 00:12:27.354 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:27.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:27.354 "prchk_reftag": false, 00:12:27.354 "prchk_guard": false, 00:12:27.354 "hdgst": false, 00:12:27.354 "ddgst": false, 00:12:27.354 "dhchap_key": "key1", 00:12:27.354 "dhchap_ctrlr_key": "ckey1", 00:12:27.354 "allow_unrecognized_csi": false, 00:12:27.354 "method": "bdev_nvme_attach_controller", 00:12:27.354 "req_id": 1 00:12:27.354 } 00:12:27.354 Got JSON-RPC error response 00:12:27.354 response: 00:12:27.354 { 00:12:27.354 "code": -5, 00:12:27.354 "message": "Input/output error" 00:12:27.354 } 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67255 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67255 ']' 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67255 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67255 00:12:27.354 killing process with pid 67255 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67255' 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67255 00:12:27.354 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67255 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.738 20:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70425 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70425 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70425 ']' 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.738 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70425 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70425 ']' 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
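For readers following the trace, the restart step above reduces to relaunching the target in --wait-for-rpc mode with nvmf_auth logging enabled, then polling its RPC socket before any further configuration. A minimal sketch of that sequence, using the binary path, flags, and socket shown in the log; the polling loop and the use of rpc_get_methods as a readiness probe are illustrative stand-ins, not the exact waitforlisten helper the suite runs:

# Launch the NVMe-oF target inside the test network namespace, paused until
# RPC-driven init, with DH-HMAC-CHAP auth tracing enabled (-L nvmf_auth).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Wait until the UNIX-domain RPC socket answers before configuring anything else.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done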
00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.997 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 null0 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RH8 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.yRw ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yRw 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.C7z 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.mpE ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mpE 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:28.566 20:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BvD 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.A29 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A29 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.566 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4s4 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
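The keyring and attach steps traced above follow a fixed pattern: register each DH-HMAC-CHAP secret file with the target's keyring, allow the host NQN on the subsystem bound to the key it must authenticate with, and attach a controller from the host side referencing the same key name. A condensed sketch of that flow with the key names, NQNs, and temporary key files from this run; target-side calls go to the default /var/tmp/spdk.sock socket and host-side ones to /var/tmp/host.sock, mirroring the hostrpc wrapper in the trace:

# Target side: register the DH-HMAC-CHAP secret file under the name key3.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.4s4

# Target side: allow the host NQN on the subsystem, binding it to key3.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3

# Host side: attach a controller over TCP, presenting the same key for authentication.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3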
00:12:28.567 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.503 nvme0n1 00:12:29.503 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.503 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.503 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.071 { 00:12:30.071 "cntlid": 1, 00:12:30.071 "qid": 0, 00:12:30.071 "state": "enabled", 00:12:30.071 "thread": "nvmf_tgt_poll_group_000", 00:12:30.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:30.071 "listen_address": { 00:12:30.071 "trtype": "TCP", 00:12:30.071 "adrfam": "IPv4", 00:12:30.071 "traddr": "10.0.0.3", 00:12:30.071 "trsvcid": "4420" 00:12:30.071 }, 00:12:30.071 "peer_address": { 00:12:30.071 "trtype": "TCP", 00:12:30.071 "adrfam": "IPv4", 00:12:30.071 "traddr": "10.0.0.1", 00:12:30.071 "trsvcid": "44770" 00:12:30.071 }, 00:12:30.071 "auth": { 00:12:30.071 "state": "completed", 00:12:30.071 "digest": "sha512", 00:12:30.071 "dhgroup": "ffdhe8192" 00:12:30.071 } 00:12:30.071 } 00:12:30.071 ]' 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.071 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.330 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:30.330 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key3 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:31.264 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.523 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.802 request: 00:12:31.802 { 00:12:31.802 "name": "nvme0", 00:12:31.802 "trtype": "tcp", 00:12:31.802 "traddr": "10.0.0.3", 00:12:31.802 "adrfam": "ipv4", 00:12:31.802 "trsvcid": "4420", 00:12:31.802 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:31.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:31.802 "prchk_reftag": false, 00:12:31.802 "prchk_guard": false, 00:12:31.802 "hdgst": false, 00:12:31.802 "ddgst": false, 00:12:31.802 "dhchap_key": "key3", 00:12:31.802 "allow_unrecognized_csi": false, 00:12:31.802 "method": "bdev_nvme_attach_controller", 00:12:31.802 "req_id": 1 00:12:31.802 } 00:12:31.802 Got JSON-RPC error response 00:12:31.802 response: 00:12:31.802 { 00:12:31.802 "code": -5, 00:12:31.802 "message": "Input/output error" 00:12:31.802 } 00:12:31.802 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:31.802 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.802 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.802 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.802 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:31.802 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:31.803 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:31.803 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.061 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.320 request: 00:12:32.320 { 00:12:32.320 "name": "nvme0", 00:12:32.320 "trtype": "tcp", 00:12:32.320 "traddr": "10.0.0.3", 00:12:32.320 "adrfam": "ipv4", 00:12:32.320 "trsvcid": "4420", 00:12:32.320 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:32.320 "prchk_reftag": false, 00:12:32.320 "prchk_guard": false, 00:12:32.320 "hdgst": false, 00:12:32.320 "ddgst": false, 00:12:32.320 "dhchap_key": "key3", 00:12:32.320 "allow_unrecognized_csi": false, 00:12:32.320 "method": "bdev_nvme_attach_controller", 00:12:32.320 "req_id": 1 00:12:32.320 } 00:12:32.320 Got JSON-RPC error response 00:12:32.320 response: 00:12:32.320 { 00:12:32.320 "code": -5, 00:12:32.320 "message": "Input/output error" 00:12:32.320 } 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:32.320 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:32.579 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:33.146 request: 00:12:33.146 { 00:12:33.146 "name": "nvme0", 00:12:33.146 "trtype": "tcp", 00:12:33.146 "traddr": "10.0.0.3", 00:12:33.146 "adrfam": "ipv4", 00:12:33.146 "trsvcid": "4420", 00:12:33.146 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:33.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:33.146 "prchk_reftag": false, 00:12:33.146 "prchk_guard": false, 00:12:33.146 "hdgst": false, 00:12:33.146 "ddgst": false, 00:12:33.146 "dhchap_key": "key0", 00:12:33.146 "dhchap_ctrlr_key": "key1", 00:12:33.146 "allow_unrecognized_csi": false, 00:12:33.146 "method": "bdev_nvme_attach_controller", 00:12:33.146 "req_id": 1 00:12:33.146 } 00:12:33.146 Got JSON-RPC error response 00:12:33.146 response: 00:12:33.146 { 00:12:33.146 "code": -5, 00:12:33.146 "message": "Input/output error" 00:12:33.146 } 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:33.146 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:33.404 nvme0n1 00:12:33.404 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:33.404 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.404 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:33.970 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.970 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.970 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.228 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 00:12:34.228 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.228 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.228 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.228 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:34.228 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:34.229 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:35.165 nvme0n1 00:12:35.165 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:35.165 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.165 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:35.424 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.683 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.683 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:35.683 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid 310b31eb-b117-4685-b95a-c58b48fd3835 -l 0 --dhchap-secret DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: --dhchap-ctrl-secret DHHC-1:03:ZGQ0ZmNhZGNjN2U1ZDM0OTg0ZmVhN2U0MmM3ZTU3Mzk1ZDg4NjU1YjVkODc0ZjYxMDVjMmI0MGZkZWQwNTNhMNPbhBg=: 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:36.649 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:37.586 request: 00:12:37.586 { 00:12:37.586 "name": "nvme0", 00:12:37.586 "trtype": "tcp", 00:12:37.586 "traddr": "10.0.0.3", 00:12:37.586 "adrfam": "ipv4", 00:12:37.586 "trsvcid": "4420", 00:12:37.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:37.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835", 00:12:37.586 "prchk_reftag": false, 00:12:37.586 "prchk_guard": false, 00:12:37.586 "hdgst": false, 00:12:37.586 "ddgst": false, 00:12:37.586 "dhchap_key": "key1", 00:12:37.586 "allow_unrecognized_csi": false, 00:12:37.586 "method": "bdev_nvme_attach_controller", 00:12:37.586 "req_id": 1 00:12:37.586 } 00:12:37.586 Got JSON-RPC error response 00:12:37.586 response: 00:12:37.586 { 00:12:37.586 "code": -5, 00:12:37.586 "message": "Input/output error" 00:12:37.586 } 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:37.586 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:38.523 nvme0n1 00:12:38.523 
20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:38.523 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:38.523 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.781 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.781 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.781 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:39.041 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:39.302 nvme0n1 00:12:39.560 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:39.560 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.560 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:39.819 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.819 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.819 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.079 20:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: '' 2s 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: ]] 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODg5ZjM2YmYwNGY4ZDQ4NTM3OTdkNTUyOGRlMzJiYzAE0vMV: 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:40.079 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: 2s 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:41.982 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:41.982 20:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: ]] 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2U0YTlkMGJlMGZjYTMxZmE0NWY5MWRhYjBkNTFhZWNlMzJlY2NlOTY0YzA0YzZhAHo1yQ==: 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:41.983 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:44.516 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:45.084 nvme0n1 00:12:45.084 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:45.084 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.084 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.084 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.084 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:45.085 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:46.023 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:46.290 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:46.290 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:46.549 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:46.808 20:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:46.808 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:47.373 request: 00:12:47.373 { 00:12:47.373 "name": "nvme0", 00:12:47.373 "dhchap_key": "key1", 00:12:47.373 "dhchap_ctrlr_key": "key3", 00:12:47.373 "method": "bdev_nvme_set_keys", 00:12:47.373 "req_id": 1 00:12:47.373 } 00:12:47.373 Got JSON-RPC error response 00:12:47.373 response: 00:12:47.373 { 00:12:47.373 "code": -13, 00:12:47.373 "message": "Permission denied" 00:12:47.373 } 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.373 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:47.631 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:47.631 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:49.003 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:49.003 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.003 20:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.003 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:50.381 nvme0n1 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.381 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.946 request: 00:12:50.946 { 00:12:50.946 "name": "nvme0", 00:12:50.946 "dhchap_key": "key2", 00:12:50.946 "dhchap_ctrlr_key": "key0", 00:12:50.946 "method": "bdev_nvme_set_keys", 00:12:50.946 "req_id": 1 00:12:50.946 } 00:12:50.946 Got JSON-RPC error response 00:12:50.946 response: 00:12:50.946 { 00:12:50.946 "code": -13, 00:12:50.946 "message": "Permission denied" 00:12:50.946 } 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.946 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:51.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:51.204 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:52.138 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:52.138 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:52.138 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67285 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67285 ']' 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67285 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67285 00:12:52.396 killing process with pid 67285 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:52.396 20:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67285' 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67285 00:12:52.396 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67285 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.963 rmmod nvme_tcp 00:12:52.963 rmmod nvme_fabrics 00:12:52.963 rmmod nvme_keyring 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70425 ']' 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70425 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70425 ']' 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70425 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.963 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70425 00:12:52.964 killing process with pid 70425 00:12:52.964 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.964 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.964 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70425' 00:12:52.964 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70425 00:12:52.964 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70425 00:12:53.222 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.222 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.222 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.222 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:53.222 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:53.223 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RH8 /tmp/spdk.key-sha256.C7z /tmp/spdk.key-sha384.BvD /tmp/spdk.key-sha512.4s4 /tmp/spdk.key-sha512.yRw /tmp/spdk.key-sha384.mpE /tmp/spdk.key-sha256.A29 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:53.482 00:12:53.482 real 3m20.938s 00:12:53.482 user 8m2.527s 00:12:53.482 sys 0m30.868s 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.482 ************************************ 00:12:53.482 END TEST nvmf_auth_target 
00:12:53.482 ************************************ 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.482 ************************************ 00:12:53.482 START TEST nvmf_bdevio_no_huge 00:12:53.482 ************************************ 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:53.482 * Looking for test storage... 00:12:53.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:53.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.482 --rc genhtml_branch_coverage=1 00:12:53.482 --rc genhtml_function_coverage=1 00:12:53.482 --rc genhtml_legend=1 00:12:53.482 --rc geninfo_all_blocks=1 00:12:53.482 --rc geninfo_unexecuted_blocks=1 00:12:53.482 00:12:53.482 ' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:53.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.482 --rc genhtml_branch_coverage=1 00:12:53.482 --rc genhtml_function_coverage=1 00:12:53.482 --rc genhtml_legend=1 00:12:53.482 --rc geninfo_all_blocks=1 00:12:53.482 --rc geninfo_unexecuted_blocks=1 00:12:53.482 00:12:53.482 ' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:53.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.482 --rc genhtml_branch_coverage=1 00:12:53.482 --rc genhtml_function_coverage=1 00:12:53.482 --rc genhtml_legend=1 00:12:53.482 --rc geninfo_all_blocks=1 00:12:53.482 --rc geninfo_unexecuted_blocks=1 00:12:53.482 00:12:53.482 ' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:53.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.482 --rc genhtml_branch_coverage=1 00:12:53.482 --rc genhtml_function_coverage=1 00:12:53.482 --rc genhtml_legend=1 00:12:53.482 --rc geninfo_all_blocks=1 00:12:53.482 --rc geninfo_unexecuted_blocks=1 00:12:53.482 00:12:53.482 ' 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:53.482 
20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.482 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.483 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.742 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:53.742 
20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:53.742 Cannot find device "nvmf_init_br" 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:53.742 Cannot find device "nvmf_init_br2" 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:53.742 Cannot find device "nvmf_tgt_br" 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.742 Cannot find device "nvmf_tgt_br2" 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:53.742 Cannot find device "nvmf_init_br" 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:53.742 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:53.742 Cannot find device "nvmf_init_br2" 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:53.743 Cannot find device "nvmf_tgt_br" 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:53.743 Cannot find device "nvmf_tgt_br2" 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:53.743 Cannot find device "nvmf_br" 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:53.743 Cannot find device "nvmf_init_if" 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:53.743 Cannot find device "nvmf_init_if2" 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:53.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:53.743 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:53.743 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:54.002 20:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:54.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:54.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:12:54.002 00:12:54.002 --- 10.0.0.3 ping statistics --- 00:12:54.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.002 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:54.002 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:54.002 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:12:54.002 00:12:54.002 --- 10.0.0.4 ping statistics --- 00:12:54.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.002 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:54.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:54.002 00:12:54.002 --- 10.0.0.1 ping statistics --- 00:12:54.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.002 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:54.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:54.002 00:12:54.002 --- 10.0.0.2 ping statistics --- 00:12:54.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.002 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71075 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71075 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71075 ']' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.002 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.002 [2024-11-26 20:33:54.315771] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:12:54.002 [2024-11-26 20:33:54.315864] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:54.261 [2024-11-26 20:33:54.468559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.261 [2024-11-26 20:33:54.536501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.261 [2024-11-26 20:33:54.536560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.261 [2024-11-26 20:33:54.536572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.261 [2024-11-26 20:33:54.536581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.261 [2024-11-26 20:33:54.536588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.261 [2024-11-26 20:33:54.537505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:54.261 [2024-11-26 20:33:54.537618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:54.261 [2024-11-26 20:33:54.537739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:54.261 [2024-11-26 20:33:54.537747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.261 [2024-11-26 20:33:54.543154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.529 [2024-11-26 20:33:54.727018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.529 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.529 Malloc0 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.530 20:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.530 [2024-11-26 20:33:54.771935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:54.530 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:54.531 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:54.531 { 00:12:54.531 "params": { 00:12:54.531 "name": "Nvme$subsystem", 00:12:54.531 "trtype": "$TEST_TRANSPORT", 00:12:54.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.531 "adrfam": "ipv4", 00:12:54.531 "trsvcid": "$NVMF_PORT", 00:12:54.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.531 "hdgst": ${hdgst:-false}, 00:12:54.531 "ddgst": ${ddgst:-false} 00:12:54.531 }, 00:12:54.531 "method": "bdev_nvme_attach_controller" 00:12:54.531 } 00:12:54.531 EOF 00:12:54.531 )") 00:12:54.531 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:54.531 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
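Condensed from the rpc_cmd calls above, the target-side provisioning for this bdevio pass is roughly the following rpc.py sequence. The NQN, the malloc bdev geometry, and the listen address are the ones this test uses, and the sketch assumes the target is already up and listening on the default /var/tmp/spdk.sock RPC socket.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u 8192 = 8 KiB I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420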
00:12:54.531 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:54.531 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:54.531 "params": { 00:12:54.531 "name": "Nvme1", 00:12:54.531 "trtype": "tcp", 00:12:54.531 "traddr": "10.0.0.3", 00:12:54.531 "adrfam": "ipv4", 00:12:54.531 "trsvcid": "4420", 00:12:54.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.531 "hdgst": false, 00:12:54.531 "ddgst": false 00:12:54.531 }, 00:12:54.531 "method": "bdev_nvme_attach_controller" 00:12:54.531 }' 00:12:54.531 [2024-11-26 20:33:54.832714] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:12:54.532 [2024-11-26 20:33:54.832813] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71098 ] 00:12:54.793 [2024-11-26 20:33:54.995118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.793 [2024-11-26 20:33:55.075967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.793 [2024-11-26 20:33:55.076099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.793 [2024-11-26 20:33:55.076499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.793 [2024-11-26 20:33:55.090618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:55.052 I/O targets: 00:12:55.052 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:55.052 00:12:55.052 00:12:55.052 CUnit - A unit testing framework for C - Version 2.1-3 00:12:55.052 http://cunit.sourceforge.net/ 00:12:55.052 00:12:55.052 00:12:55.052 Suite: bdevio tests on: Nvme1n1 00:12:55.052 Test: blockdev write read block ...passed 00:12:55.052 Test: blockdev write zeroes read block ...passed 00:12:55.052 Test: blockdev write zeroes read no split ...passed 00:12:55.052 Test: blockdev write zeroes read split ...passed 00:12:55.052 Test: blockdev write zeroes read split partial ...passed 00:12:55.053 Test: blockdev reset ...[2024-11-26 20:33:55.332592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:55.053 [2024-11-26 20:33:55.332801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f320 (9): Bad file descriptor 00:12:55.053 [2024-11-26 20:33:55.351685] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:55.053 passed 00:12:55.053 Test: blockdev write read 8 blocks ...passed 00:12:55.053 Test: blockdev write read size > 128k ...passed 00:12:55.053 Test: blockdev write read invalid size ...passed 00:12:55.053 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.053 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.053 Test: blockdev write read max offset ...passed 00:12:55.053 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.053 Test: blockdev writev readv 8 blocks ...passed 00:12:55.053 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.053 Test: blockdev writev readv block ...passed 00:12:55.053 Test: blockdev writev readv size > 128k ...passed 00:12:55.053 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.053 Test: blockdev comparev and writev ...[2024-11-26 20:33:55.360458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.360496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.360517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.360528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.360795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.360818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.360836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.360847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.361119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.361141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.361158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.361168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.361508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.361530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.361546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:55.053 [2024-11-26 20:33:55.361556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:55.053 passed 00:12:55.053 Test: blockdev nvme passthru rw ...passed 00:12:55.053 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:33:55.362658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:55.053 [2024-11-26 20:33:55.362683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.362790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:55.053 [2024-11-26 20:33:55.362806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.362906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:55.053 [2024-11-26 20:33:55.362922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:55.053 [2024-11-26 20:33:55.363013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:55.053 [2024-11-26 20:33:55.363029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:55.053 passed 00:12:55.053 Test: blockdev nvme admin passthru ...passed 00:12:55.053 Test: blockdev copy ...passed 00:12:55.053 00:12:55.053 Run Summary: Type Total Ran Passed Failed Inactive 00:12:55.053 suites 1 1 n/a 0 0 00:12:55.053 tests 23 23 23 0 0 00:12:55.053 asserts 152 152 152 0 n/a 00:12:55.053 00:12:55.053 Elapsed time = 0.184 seconds 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.620 rmmod nvme_tcp 00:12:55.620 rmmod nvme_fabrics 00:12:55.620 rmmod nvme_keyring 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71075 ']' 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71075 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71075 ']' 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71075 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71075 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:55.620 killing process with pid 71075 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71075' 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71075 00:12:55.620 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71075 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:56.187 20:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.187 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:56.446 00:12:56.446 real 0m2.912s 00:12:56.446 user 0m8.018s 00:12:56.446 sys 0m1.420s 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.446 ************************************ 00:12:56.446 END TEST nvmf_bdevio_no_huge 00:12:56.446 ************************************ 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.446 ************************************ 00:12:56.446 START TEST nvmf_tls 00:12:56.446 ************************************ 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:56.446 * Looking for test storage... 
00:12:56.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:12:56.446 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:56.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.705 --rc genhtml_branch_coverage=1 00:12:56.705 --rc genhtml_function_coverage=1 00:12:56.705 --rc genhtml_legend=1 00:12:56.705 --rc geninfo_all_blocks=1 00:12:56.705 --rc geninfo_unexecuted_blocks=1 00:12:56.705 00:12:56.705 ' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:56.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.705 --rc genhtml_branch_coverage=1 00:12:56.705 --rc genhtml_function_coverage=1 00:12:56.705 --rc genhtml_legend=1 00:12:56.705 --rc geninfo_all_blocks=1 00:12:56.705 --rc geninfo_unexecuted_blocks=1 00:12:56.705 00:12:56.705 ' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:56.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.705 --rc genhtml_branch_coverage=1 00:12:56.705 --rc genhtml_function_coverage=1 00:12:56.705 --rc genhtml_legend=1 00:12:56.705 --rc geninfo_all_blocks=1 00:12:56.705 --rc geninfo_unexecuted_blocks=1 00:12:56.705 00:12:56.705 ' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:56.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.705 --rc genhtml_branch_coverage=1 00:12:56.705 --rc genhtml_function_coverage=1 00:12:56.705 --rc genhtml_legend=1 00:12:56.705 --rc geninfo_all_blocks=1 00:12:56.705 --rc geninfo_unexecuted_blocks=1 00:12:56.705 00:12:56.705 ' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.705 20:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.705 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.706 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:56.706 
20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:56.706 Cannot find device "nvmf_init_br" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:56.706 Cannot find device "nvmf_init_br2" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:56.706 Cannot find device "nvmf_tgt_br" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.706 Cannot find device "nvmf_tgt_br2" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:56.706 Cannot find device "nvmf_init_br" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:56.706 Cannot find device "nvmf_init_br2" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:56.706 Cannot find device "nvmf_tgt_br" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:56.706 Cannot find device "nvmf_tgt_br2" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:56.706 Cannot find device "nvmf_br" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:56.706 Cannot find device "nvmf_init_if" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:56.706 Cannot find device "nvmf_init_if2" 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.706 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.706 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:56.706 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.706 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:56.965 20:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:56.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:12:56.965 00:12:56.965 --- 10.0.0.3 ping statistics --- 00:12:56.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.965 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:56.965 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:56.965 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:12:56.965 00:12:56.965 --- 10.0.0.4 ping statistics --- 00:12:56.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.965 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:56.965 00:12:56.965 --- 10.0.0.1 ping statistics --- 00:12:56.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.965 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:56.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:56.965 00:12:56.965 --- 10.0.0.2 ping statistics --- 00:12:56.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.965 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71343 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71343 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71343 ']' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.965 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:57.223 [2024-11-26 20:33:57.345875] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
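Before any NVMe/TCP traffic flows, the nvmf_veth_init block traced above wires the initiator and target together: veth pairs bridged in the root namespace, the target ends moved into nvmf_tgt_ns_spdk, iptables rules admitting port 4420, and pings to confirm reachability. A condensed sketch of the first initiator/target pair follows; the scripts create a second pair (10.0.0.2/10.0.0.4) the same way.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target side is moved into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                              # host -> target sanity check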
00:12:57.223 [2024-11-26 20:33:57.346575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.223 [2024-11-26 20:33:57.498069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.223 [2024-11-26 20:33:57.560964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.223 [2024-11-26 20:33:57.561022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.223 [2024-11-26 20:33:57.561034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.223 [2024-11-26 20:33:57.561043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.223 [2024-11-26 20:33:57.561051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.223 [2024-11-26 20:33:57.561512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:58.228 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:58.487 true 00:12:58.487 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:58.487 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:58.747 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:58.747 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:58.747 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:59.007 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:59.007 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:59.267 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:59.267 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:59.267 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:59.526 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:59.526 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:00.094 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:00.094 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:00.094 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:00.094 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.373 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:00.373 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:00.373 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:00.632 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.632 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:00.889 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:00.889 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:00.889 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:01.146 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:01.146 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Yccnk6Y0pq 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.I5avx2drxz 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Yccnk6Y0pq 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.I5avx2drxz 00:13:01.406 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:01.663 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:01.921 [2024-11-26 20:34:02.267476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:02.180 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Yccnk6Y0pq 00:13:02.180 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Yccnk6Y0pq 00:13:02.180 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:02.439 [2024-11-26 20:34:02.601708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.439 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:02.704 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:02.969 [2024-11-26 20:34:03.213849] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:02.969 [2024-11-26 20:34:03.214123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:02.969 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:03.536 malloc0 00:13:03.536 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:03.536 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Yccnk6Y0pq 00:13:04.102 20:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:04.102 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Yccnk6Y0pq 00:13:16.330 Initializing NVMe Controllers 00:13:16.330 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.330 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:16.330 Initialization complete. Launching workers. 00:13:16.330 ======================================================== 00:13:16.330 Latency(us) 00:13:16.330 Device Information : IOPS MiB/s Average min max 00:13:16.330 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7453.77 29.12 8589.17 1430.45 17385.56 00:13:16.330 ======================================================== 00:13:16.330 Total : 7453.77 29.12 8589.17 1430.45 17385.56 00:13:16.330 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yccnk6Y0pq 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Yccnk6Y0pq 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.330 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71593 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71593 /var/tmp/bdevperf.sock 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71593 ']' 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
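The format_interchange_psk calls earlier in this trace turn a configured hex key into the NVMe/TCP PSK interchange form written to /tmp/tmp.Yccnk6Y0pq and /tmp/tmp.I5avx2drxz. A minimal Python sketch of that transformation, assuming it mirrors the format_key helper traced above (the key string is used as-is, a 4-byte little-endian CRC32 is appended, the result is base64-encoded, and the two-digit field is the hash identifier, 01 for SHA-256 and 02 for SHA-384 -- these details are inferred from the trace, not confirmed against the spec here):

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Sketch only: rebuild the NVMeTLSkey-1 interchange string seen in the log.
    # Assumption: the key bytes are the literal characters of the configured key,
    # with a little-endian CRC32 appended before base64 encoding.
    key_bytes = key.encode()
    crc = zlib.crc32(key_bytes).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(key_bytes + crc).decode()}:"

# If the assumptions hold, this reproduces the first key generated above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))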
00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.331 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.331 [2024-11-26 20:34:14.696548] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:16.331 [2024-11-26 20:34:14.697259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71593 ] 00:13:16.331 [2024-11-26 20:34:14.847792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.331 [2024-11-26 20:34:14.917830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.331 [2024-11-26 20:34:14.978558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.331 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.331 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:16.331 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yccnk6Y0pq 00:13:16.331 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:16.331 [2024-11-26 20:34:15.609868] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:16.331 TLSTESTn1 00:13:16.331 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:16.331 Running I/O for 10 seconds... 
00:13:17.527 3478.00 IOPS, 13.59 MiB/s [2024-11-26T20:34:19.253Z] 3597.00 IOPS, 14.05 MiB/s [2024-11-26T20:34:20.204Z] 3703.67 IOPS, 14.47 MiB/s [2024-11-26T20:34:21.136Z] 3771.75 IOPS, 14.73 MiB/s [2024-11-26T20:34:22.070Z] 3818.80 IOPS, 14.92 MiB/s [2024-11-26T20:34:23.006Z] 3837.33 IOPS, 14.99 MiB/s [2024-11-26T20:34:23.942Z] 3856.29 IOPS, 15.06 MiB/s [2024-11-26T20:34:24.879Z] 3862.12 IOPS, 15.09 MiB/s [2024-11-26T20:34:26.255Z] 3856.22 IOPS, 15.06 MiB/s [2024-11-26T20:34:26.255Z] 3871.50 IOPS, 15.12 MiB/s 00:13:25.900 Latency(us) 00:13:25.900 [2024-11-26T20:34:26.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.900 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:25.900 Verification LBA range: start 0x0 length 0x2000 00:13:25.900 TLSTESTn1 : 10.02 3877.65 15.15 0.00 0.00 32948.31 6017.40 31218.97 00:13:25.900 [2024-11-26T20:34:26.255Z] =================================================================================================================== 00:13:25.900 [2024-11-26T20:34:26.255Z] Total : 3877.65 15.15 0.00 0.00 32948.31 6017.40 31218.97 00:13:25.900 { 00:13:25.900 "results": [ 00:13:25.900 { 00:13:25.900 "job": "TLSTESTn1", 00:13:25.900 "core_mask": "0x4", 00:13:25.900 "workload": "verify", 00:13:25.900 "status": "finished", 00:13:25.900 "verify_range": { 00:13:25.900 "start": 0, 00:13:25.900 "length": 8192 00:13:25.900 }, 00:13:25.900 "queue_depth": 128, 00:13:25.900 "io_size": 4096, 00:13:25.900 "runtime": 10.016635, 00:13:25.900 "iops": 3877.6495300068336, 00:13:25.900 "mibps": 15.147068476589194, 00:13:25.900 "io_failed": 0, 00:13:25.900 "io_timeout": 0, 00:13:25.900 "avg_latency_us": 32948.314461709866, 00:13:25.900 "min_latency_us": 6017.396363636363, 00:13:25.900 "max_latency_us": 31218.967272727274 00:13:25.900 } 00:13:25.900 ], 00:13:25.900 "core_count": 1 00:13:25.900 } 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71593 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71593 ']' 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71593 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71593 00:13:25.900 killing process with pid 71593 00:13:25.900 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.900 00:13:25.900 Latency(us) 00:13:25.900 [2024-11-26T20:34:26.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.900 [2024-11-26T20:34:26.255Z] =================================================================================================================== 00:13:25.900 [2024-11-26T20:34:26.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 71593' 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71593 00:13:25.900 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71593 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I5avx2drxz 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I5avx2drxz 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:25.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I5avx2drxz 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.I5avx2drxz 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71720 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71720 /var/tmp/bdevperf.sock 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71720 ']' 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.900 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.900 [2024-11-26 20:34:26.179403] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:13:25.900 [2024-11-26 20:34:26.179553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71720 ] 00:13:26.159 [2024-11-26 20:34:26.332549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.159 [2024-11-26 20:34:26.393013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.159 [2024-11-26 20:34:26.447116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.094 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.094 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:27.094 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I5avx2drxz 00:13:27.362 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:27.622 [2024-11-26 20:34:27.817679] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.622 [2024-11-26 20:34:27.822811] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:27.622 [2024-11-26 20:34:27.823412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x642ff0 (107): Transport endpoint is not connected 00:13:27.622 [2024-11-26 20:34:27.824399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x642ff0 (9): Bad file descriptor 00:13:27.622 [2024-11-26 20:34:27.825395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:27.622 [2024-11-26 20:34:27.825420] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:27.622 [2024-11-26 20:34:27.825431] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:27.622 [2024-11-26 20:34:27.825446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:27.622 request: 00:13:27.622 { 00:13:27.622 "name": "TLSTEST", 00:13:27.622 "trtype": "tcp", 00:13:27.622 "traddr": "10.0.0.3", 00:13:27.622 "adrfam": "ipv4", 00:13:27.622 "trsvcid": "4420", 00:13:27.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:27.622 "prchk_reftag": false, 00:13:27.622 "prchk_guard": false, 00:13:27.622 "hdgst": false, 00:13:27.622 "ddgst": false, 00:13:27.622 "psk": "key0", 00:13:27.622 "allow_unrecognized_csi": false, 00:13:27.622 "method": "bdev_nvme_attach_controller", 00:13:27.622 "req_id": 1 00:13:27.622 } 00:13:27.622 Got JSON-RPC error response 00:13:27.622 response: 00:13:27.622 { 00:13:27.622 "code": -5, 00:13:27.622 "message": "Input/output error" 00:13:27.622 } 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71720 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71720 ']' 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71720 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71720 00:13:27.622 killing process with pid 71720 00:13:27.622 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.622 00:13:27.622 Latency(us) 00:13:27.622 [2024-11-26T20:34:27.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.622 [2024-11-26T20:34:27.977Z] =================================================================================================================== 00:13:27.622 [2024-11-26T20:34:27.977Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71720' 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71720 00:13:27.622 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71720 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Yccnk6Y0pq 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Yccnk6Y0pq 
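The check that just failed (target/tls.sh@147) keys the target with /tmp/tmp.Yccnk6Y0pq but registers /tmp/tmp.I5avx2drxz on the initiator, so the TLS handshake never completes and bdev_nvme_attach_controller returns the Input/output error captured in the JSON-RPC response above. A compressed sketch of that negative path, reusing the rpc.py invocations visible in the trace (socket path, address, NQNs and key file are the ones from this run; only the subprocess wrapper is an addition):

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bdevperf.sock"

def rpc(*args: str) -> subprocess.CompletedProcess:
    # Same CLI the test drives; -s points at the bdevperf RPC socket.
    return subprocess.run([RPC, "-s", SOCK, *args], capture_output=True, text=True)

# Register the non-matching PSK under the name the attach call references.
rpc("keyring_file_add_key", "key0", "/tmp/tmp.I5avx2drxz")

# The attach must fail: the target only accepts the key in /tmp/tmp.Yccnk6Y0pq.
result = rpc("bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
             "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
             "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
             "--psk", "key0")
assert result.returncode != 0, "attach unexpectedly succeeded with the wrong PSK"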
00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:27.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Yccnk6Y0pq 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Yccnk6Y0pq 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71749 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71749 /var/tmp/bdevperf.sock 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71749 ']' 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.882 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.882 [2024-11-26 20:34:28.125871] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:13:27.882 [2024-11-26 20:34:28.125978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71749 ] 00:13:28.167 [2024-11-26 20:34:28.274877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.167 [2024-11-26 20:34:28.335221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.167 [2024-11-26 20:34:28.390200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.167 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.167 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:28.167 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yccnk6Y0pq 00:13:28.436 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:28.695 [2024-11-26 20:34:28.950499] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.695 [2024-11-26 20:34:28.961931] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:28.695 [2024-11-26 20:34:28.961975] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:28.695 [2024-11-26 20:34:28.962027] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:28.695 [2024-11-26 20:34:28.962164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x793ff0 (107): Transport endpoint is not connected 00:13:28.695 [2024-11-26 20:34:28.963156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x793ff0 (9): Bad file descriptor 00:13:28.695 [2024-11-26 20:34:28.964152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:28.695 [2024-11-26 20:34:28.964178] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:28.695 [2024-11-26 20:34:28.964190] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:28.695 [2024-11-26 20:34:28.964205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:28.695 request: 00:13:28.695 { 00:13:28.695 "name": "TLSTEST", 00:13:28.695 "trtype": "tcp", 00:13:28.695 "traddr": "10.0.0.3", 00:13:28.695 "adrfam": "ipv4", 00:13:28.695 "trsvcid": "4420", 00:13:28.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.695 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:28.695 "prchk_reftag": false, 00:13:28.695 "prchk_guard": false, 00:13:28.695 "hdgst": false, 00:13:28.695 "ddgst": false, 00:13:28.695 "psk": "key0", 00:13:28.695 "allow_unrecognized_csi": false, 00:13:28.695 "method": "bdev_nvme_attach_controller", 00:13:28.695 "req_id": 1 00:13:28.695 } 00:13:28.695 Got JSON-RPC error response 00:13:28.695 response: 00:13:28.695 { 00:13:28.695 "code": -5, 00:13:28.695 "message": "Input/output error" 00:13:28.695 } 00:13:28.695 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71749 00:13:28.695 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71749 ']' 00:13:28.695 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71749 00:13:28.695 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:28.695 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.695 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71749 00:13:28.695 killing process with pid 71749 00:13:28.695 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.695 00:13:28.695 Latency(us) 00:13:28.695 [2024-11-26T20:34:29.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.695 [2024-11-26T20:34:29.050Z] =================================================================================================================== 00:13:28.695 [2024-11-26T20:34:29.050Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.695 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:28.696 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:28.696 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71749' 00:13:28.696 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71749 00:13:28.696 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71749 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yccnk6Y0pq 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yccnk6Y0pq 
00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yccnk6Y0pq 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Yccnk6Y0pq 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71770 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71770 /var/tmp/bdevperf.sock 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71770 ']' 00:13:28.954 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.955 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.955 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:28.955 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.955 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.955 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.955 [2024-11-26 20:34:29.269843] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:13:28.955 [2024-11-26 20:34:29.269938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71770 ] 00:13:29.213 [2024-11-26 20:34:29.417232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.213 [2024-11-26 20:34:29.474348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.213 [2024-11-26 20:34:29.529496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.148 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.148 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:30.148 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yccnk6Y0pq 00:13:30.405 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:30.665 [2024-11-26 20:34:30.828034] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:30.665 [2024-11-26 20:34:30.833132] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:30.665 [2024-11-26 20:34:30.833175] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:30.665 [2024-11-26 20:34:30.833243] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:30.665 [2024-11-26 20:34:30.833863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199bff0 (107): Transport endpoint is not connected 00:13:30.665 [2024-11-26 20:34:30.834847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199bff0 (9): Bad file descriptor 00:13:30.665 [2024-11-26 20:34:30.835842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:30.665 [2024-11-26 20:34:30.835882] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:30.665 [2024-11-26 20:34:30.835902] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:30.665 [2024-11-26 20:34:30.835928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:30.665 request: 00:13:30.665 { 00:13:30.665 "name": "TLSTEST", 00:13:30.665 "trtype": "tcp", 00:13:30.665 "traddr": "10.0.0.3", 00:13:30.665 "adrfam": "ipv4", 00:13:30.665 "trsvcid": "4420", 00:13:30.665 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:30.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.665 "prchk_reftag": false, 00:13:30.665 "prchk_guard": false, 00:13:30.665 "hdgst": false, 00:13:30.665 "ddgst": false, 00:13:30.665 "psk": "key0", 00:13:30.665 "allow_unrecognized_csi": false, 00:13:30.665 "method": "bdev_nvme_attach_controller", 00:13:30.665 "req_id": 1 00:13:30.665 } 00:13:30.665 Got JSON-RPC error response 00:13:30.665 response: 00:13:30.665 { 00:13:30.665 "code": -5, 00:13:30.665 "message": "Input/output error" 00:13:30.665 } 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71770 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71770 ']' 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71770 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71770 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71770' 00:13:30.665 killing process with pid 71770 00:13:30.665 Received shutdown signal, test time was about 10.000000 seconds 00:13:30.665 00:13:30.665 Latency(us) 00:13:30.665 [2024-11-26T20:34:31.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.665 [2024-11-26T20:34:31.020Z] =================================================================================================================== 00:13:30.665 [2024-11-26T20:34:31.020Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71770 00:13:30.665 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71770 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:30.925 20:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71804 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71804 /var/tmp/bdevperf.sock 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71804 ']' 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.925 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.925 [2024-11-26 20:34:31.139100] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:13:30.925 [2024-11-26 20:34:31.139187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71804 ] 00:13:31.183 [2024-11-26 20:34:31.281659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.183 [2024-11-26 20:34:31.342232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.183 [2024-11-26 20:34:31.397461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.116 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.116 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:32.116 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:32.116 [2024-11-26 20:34:32.460139] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:32.116 [2024-11-26 20:34:32.460216] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:32.116 request: 00:13:32.116 { 00:13:32.116 "name": "key0", 00:13:32.116 "path": "", 00:13:32.116 "method": "keyring_file_add_key", 00:13:32.116 "req_id": 1 00:13:32.116 } 00:13:32.116 Got JSON-RPC error response 00:13:32.116 response: 00:13:32.116 { 00:13:32.116 "code": -1, 00:13:32.116 "message": "Operation not permitted" 00:13:32.116 } 00:13:32.374 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:32.632 [2024-11-26 20:34:32.752308] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.632 [2024-11-26 20:34:32.752369] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:32.632 request: 00:13:32.632 { 00:13:32.632 "name": "TLSTEST", 00:13:32.632 "trtype": "tcp", 00:13:32.632 "traddr": "10.0.0.3", 00:13:32.632 "adrfam": "ipv4", 00:13:32.632 "trsvcid": "4420", 00:13:32.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.633 "prchk_reftag": false, 00:13:32.633 "prchk_guard": false, 00:13:32.633 "hdgst": false, 00:13:32.633 "ddgst": false, 00:13:32.633 "psk": "key0", 00:13:32.633 "allow_unrecognized_csi": false, 00:13:32.633 "method": "bdev_nvme_attach_controller", 00:13:32.633 "req_id": 1 00:13:32.633 } 00:13:32.633 Got JSON-RPC error response 00:13:32.633 response: 00:13:32.633 { 00:13:32.633 "code": -126, 00:13:32.633 "message": "Required key not available" 00:13:32.633 } 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71804 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71804 ']' 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71804 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.633 20:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71804 00:13:32.633 killing process with pid 71804 00:13:32.633 Received shutdown signal, test time was about 10.000000 seconds 00:13:32.633 00:13:32.633 Latency(us) 00:13:32.633 [2024-11-26T20:34:32.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.633 [2024-11-26T20:34:32.988Z] =================================================================================================================== 00:13:32.633 [2024-11-26T20:34:32.988Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71804' 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71804 00:13:32.633 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71804 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71343 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71343 ']' 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71343 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.891 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71343 00:13:32.891 killing process with pid 71343 00:13:32.891 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:32.891 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:32.891 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71343' 00:13:32.891 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71343 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71343 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:32.892 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.fVGwUt2wMd 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.fVGwUt2wMd 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71848 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71848 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71848 ']' 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.150 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.150 [2024-11-26 20:34:33.374113] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:33.150 [2024-11-26 20:34:33.374257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.408 [2024-11-26 20:34:33.532872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.408 [2024-11-26 20:34:33.593491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.408 [2024-11-26 20:34:33.593551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:33.408 [2024-11-26 20:34:33.593565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.408 [2024-11-26 20:34:33.593576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.408 [2024-11-26 20:34:33.593584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.408 [2024-11-26 20:34:33.594025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.408 [2024-11-26 20:34:33.649526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.fVGwUt2wMd 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fVGwUt2wMd 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:34.342 [2024-11-26 20:34:34.626479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.342 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:34.672 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:34.943 [2024-11-26 20:34:35.170621] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:34.943 [2024-11-26 20:34:35.170862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:34.943 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:35.201 malloc0 00:13:35.201 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:35.461 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:13:36.028 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:36.028 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fVGwUt2wMd 00:13:36.028 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
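setup_nvmf_tgt, traced again just above for the new SHA-384 key in /tmp/tmp.fVGwUt2wMd, is the same sequence of RPCs each time it runs. Collected into one sketch for readability (every subcommand and flag is copied from the trace; the subprocess wrapper and the loop are the only additions):

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SUBNQN = "nqn.2016-06.io.spdk:cnode1"
HOSTNQN = "nqn.2016-06.io.spdk:host1"
KEY_PATH = "/tmp/tmp.fVGwUt2wMd"  # chmod 0600 file holding the NVMeTLSkey-1:02:... string

steps = [
    ["nvmf_create_transport", "-t", "tcp", "-o"],
    ["nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10"],
    # -k marks the listener as TLS ("TLS support is considered experimental" in the log)
    ["nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k"],
    ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
    ["nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1"],
    ["keyring_file_add_key", "key0", KEY_PATH],
    ["nvmf_subsystem_add_host", SUBNQN, HOSTNQN, "--psk", "key0"],
]
for step in steps:
    subprocess.run([RPC, *step], check=True)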
00:13:36.028 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fVGwUt2wMd 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71909 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71909 /var/tmp/bdevperf.sock 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71909 ']' 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.029 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.029 [2024-11-26 20:34:36.381476] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:13:36.029 [2024-11-26 20:34:36.381565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71909 ] 00:13:36.287 [2024-11-26 20:34:36.531344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.287 [2024-11-26 20:34:36.594782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.545 [2024-11-26 20:34:36.652952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.112 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.112 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:37.112 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:13:37.370 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:37.628 [2024-11-26 20:34:37.924196] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.894 TLSTESTn1 00:13:37.894 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:37.894 Running I/O for 10 seconds... 00:13:40.200 3975.00 IOPS, 15.53 MiB/s [2024-11-26T20:34:41.490Z] 4075.00 IOPS, 15.92 MiB/s [2024-11-26T20:34:42.425Z] 4103.33 IOPS, 16.03 MiB/s [2024-11-26T20:34:43.359Z] 4120.00 IOPS, 16.09 MiB/s [2024-11-26T20:34:44.294Z] 4130.20 IOPS, 16.13 MiB/s [2024-11-26T20:34:45.230Z] 4137.00 IOPS, 16.16 MiB/s [2024-11-26T20:34:46.164Z] 4139.71 IOPS, 16.17 MiB/s [2024-11-26T20:34:47.539Z] 4139.25 IOPS, 16.17 MiB/s [2024-11-26T20:34:48.475Z] 4140.89 IOPS, 16.18 MiB/s [2024-11-26T20:34:48.475Z] 4141.40 IOPS, 16.18 MiB/s 00:13:48.120 Latency(us) 00:13:48.120 [2024-11-26T20:34:48.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.120 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:48.120 Verification LBA range: start 0x0 length 0x2000 00:13:48.120 TLSTESTn1 : 10.02 4147.14 16.20 0.00 0.00 30807.25 5898.24 25141.99 00:13:48.120 [2024-11-26T20:34:48.475Z] =================================================================================================================== 00:13:48.120 [2024-11-26T20:34:48.475Z] Total : 4147.14 16.20 0.00 0.00 30807.25 5898.24 25141.99 00:13:48.120 { 00:13:48.120 "results": [ 00:13:48.120 { 00:13:48.120 "job": "TLSTESTn1", 00:13:48.120 "core_mask": "0x4", 00:13:48.120 "workload": "verify", 00:13:48.120 "status": "finished", 00:13:48.120 "verify_range": { 00:13:48.120 "start": 0, 00:13:48.120 "length": 8192 00:13:48.120 }, 00:13:48.120 "queue_depth": 128, 00:13:48.120 "io_size": 4096, 00:13:48.120 "runtime": 10.016532, 00:13:48.120 "iops": 4147.143941635688, 00:13:48.120 "mibps": 16.199781022014406, 00:13:48.120 "io_failed": 0, 00:13:48.120 "io_timeout": 0, 00:13:48.120 "avg_latency_us": 30807.25105685648, 00:13:48.120 "min_latency_us": 5898.24, 00:13:48.120 "max_latency_us": 
25141.992727272725 00:13:48.120 } 00:13:48.120 ], 00:13:48.120 "core_count": 1 00:13:48.120 } 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71909 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71909 ']' 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71909 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71909 00:13:48.120 killing process with pid 71909 00:13:48.120 Received shutdown signal, test time was about 10.000000 seconds 00:13:48.120 00:13:48.120 Latency(us) 00:13:48.120 [2024-11-26T20:34:48.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.120 [2024-11-26T20:34:48.475Z] =================================================================================================================== 00:13:48.120 [2024-11-26T20:34:48.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71909' 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71909 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71909 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.fVGwUt2wMd 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fVGwUt2wMd 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fVGwUt2wMd 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fVGwUt2wMd 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 
-- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fVGwUt2wMd 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72046 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72046 /var/tmp/bdevperf.sock 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72046 ']' 00:13:48.120 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.121 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.121 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:48.121 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.121 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.121 [2024-11-26 20:34:48.464832] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:13:48.121 [2024-11-26 20:34:48.465092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72046 ] 00:13:48.378 [2024-11-26 20:34:48.606117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.378 [2024-11-26 20:34:48.664315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.378 [2024-11-26 20:34:48.721336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:48.635 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.635 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:48.635 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:13:48.894 [2024-11-26 20:34:49.073550] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fVGwUt2wMd': 0100666 00:13:48.894 [2024-11-26 20:34:49.073793] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:48.894 request: 00:13:48.894 { 00:13:48.894 "name": "key0", 00:13:48.894 "path": "/tmp/tmp.fVGwUt2wMd", 00:13:48.894 "method": "keyring_file_add_key", 00:13:48.894 "req_id": 1 00:13:48.894 } 00:13:48.894 Got JSON-RPC error response 00:13:48.894 response: 00:13:48.894 { 00:13:48.894 "code": -1, 00:13:48.894 "message": "Operation not permitted" 00:13:48.894 } 00:13:48.894 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.156 [2024-11-26 20:34:49.373722] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.156 [2024-11-26 20:34:49.374034] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:49.156 request: 00:13:49.156 { 00:13:49.156 "name": "TLSTEST", 00:13:49.156 "trtype": "tcp", 00:13:49.156 "traddr": "10.0.0.3", 00:13:49.156 "adrfam": "ipv4", 00:13:49.156 "trsvcid": "4420", 00:13:49.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:49.156 "prchk_reftag": false, 00:13:49.156 "prchk_guard": false, 00:13:49.156 "hdgst": false, 00:13:49.156 "ddgst": false, 00:13:49.156 "psk": "key0", 00:13:49.156 "allow_unrecognized_csi": false, 00:13:49.156 "method": "bdev_nvme_attach_controller", 00:13:49.156 "req_id": 1 00:13:49.156 } 00:13:49.157 Got JSON-RPC error response 00:13:49.157 response: 00:13:49.157 { 00:13:49.157 "code": -126, 00:13:49.157 "message": "Required key not available" 00:13:49.157 } 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72046 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72046 ']' 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72046 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72046 00:13:49.157 killing process with pid 72046 00:13:49.157 Received shutdown signal, test time was about 10.000000 seconds 00:13:49.157 00:13:49.157 Latency(us) 00:13:49.157 [2024-11-26T20:34:49.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.157 [2024-11-26T20:34:49.512Z] =================================================================================================================== 00:13:49.157 [2024-11-26T20:34:49.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72046' 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72046 00:13:49.157 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72046 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71848 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71848 ']' 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71848 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71848 00:13:49.415 killing process with pid 71848 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71848' 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71848 00:13:49.415 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71848 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72076 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72076 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72076 ']' 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.673 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.673 [2024-11-26 20:34:49.928017] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:49.673 [2024-11-26 20:34:49.928272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.932 [2024-11-26 20:34:50.075956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.932 [2024-11-26 20:34:50.121435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.932 [2024-11-26 20:34:50.121708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.932 [2024-11-26 20:34:50.121881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.932 [2024-11-26 20:34:50.122085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.932 [2024-11-26 20:34:50.122100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.932 [2024-11-26 20:34:50.122488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.932 [2024-11-26 20:34:50.174528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.fVGwUt2wMd 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fVGwUt2wMd 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.fVGwUt2wMd 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fVGwUt2wMd 00:13:49.932 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:50.499 [2024-11-26 20:34:50.559064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.499 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:50.757 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:51.016 [2024-11-26 20:34:51.143200] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:51.016 [2024-11-26 20:34:51.143589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:51.016 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:51.274 malloc0 00:13:51.274 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:51.533 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:13:51.791 
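The entries that follow show why this call is expected to fail: SPDK's file-based keyring rejects a PSK file whose mode grants group or other access, so the key left at 0666 by the chmod above is refused, and the very same file is accepted once target/tls.sh tightens it to 0600 further on. A minimal sketch of that check, reusing the paths from this run:

chmod 0666 /tmp/tmp.fVGwUt2wMd    # world-accessible: keyring_file_add_key is expected to report 'Invalid permissions'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd
chmod 0600 /tmp/tmp.fVGwUt2wMd    # owner-only: the same call succeeds and the key becomes usable as a TLS PSK
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd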
[2024-11-26 20:34:51.942042] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fVGwUt2wMd': 0100666 00:13:51.791 [2024-11-26 20:34:51.942097] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:51.791 request: 00:13:51.791 { 00:13:51.791 "name": "key0", 00:13:51.791 "path": "/tmp/tmp.fVGwUt2wMd", 00:13:51.791 "method": "keyring_file_add_key", 00:13:51.791 "req_id": 1 00:13:51.791 } 00:13:51.791 Got JSON-RPC error response 00:13:51.791 response: 00:13:51.791 { 00:13:51.791 "code": -1, 00:13:51.791 "message": "Operation not permitted" 00:13:51.791 } 00:13:51.791 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:52.050 [2024-11-26 20:34:52.198130] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:52.050 [2024-11-26 20:34:52.198362] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:52.050 request: 00:13:52.050 { 00:13:52.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.050 "host": "nqn.2016-06.io.spdk:host1", 00:13:52.050 "psk": "key0", 00:13:52.050 "method": "nvmf_subsystem_add_host", 00:13:52.050 "req_id": 1 00:13:52.050 } 00:13:52.050 Got JSON-RPC error response 00:13:52.050 response: 00:13:52.050 { 00:13:52.050 "code": -32603, 00:13:52.050 "message": "Internal error" 00:13:52.050 } 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72076 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72076 ']' 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72076 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72076 00:13:52.050 killing process with pid 72076 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72076' 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72076 00:13:52.050 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72076 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.fVGwUt2wMd 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72139 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72139 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72139 ']' 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.309 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.310 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.310 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.310 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.310 [2024-11-26 20:34:52.503358] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:52.310 [2024-11-26 20:34:52.503595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.310 [2024-11-26 20:34:52.647734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.568 [2024-11-26 20:34:52.703625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.568 [2024-11-26 20:34:52.703692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.568 [2024-11-26 20:34:52.703706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.568 [2024-11-26 20:34:52.703715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.568 [2024-11-26 20:34:52.703722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
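With the key file now at mode 0600, the target-side setup that setup_nvmf_tgt performs in the following entries amounts to, in outline (same NQNs, address and key as this run; the netns wrapper used by the test is omitted):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k    # -k requests a TLS-secured listener, hence the 'TLS support is considered experimental' notice
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0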
00:13:52.568 [2024-11-26 20:34:52.704126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.568 [2024-11-26 20:34:52.757355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.568 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.568 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:52.568 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.568 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.568 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.569 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.569 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.fVGwUt2wMd 00:13:52.569 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fVGwUt2wMd 00:13:52.569 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:52.827 [2024-11-26 20:34:53.154124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.827 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:53.400 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:53.400 [2024-11-26 20:34:53.706392] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:53.400 [2024-11-26 20:34:53.706988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.400 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:53.672 malloc0 00:13:53.931 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:54.190 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:13:54.190 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72188 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72188 /var/tmp/bdevperf.sock 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72188 ']' 
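On the initiator side, the steps that follow repeat the pattern of the first TLSTEST run: bdevperf is started with -z so it waits to be configured over its private RPC socket, the same key file is registered there, a controller is attached with --psk so the TCP connection is established over TLS, and I/O is then driven through bdevperf.py. A minimal sketch with the arguments used in this log:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests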
00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.448 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.707 [2024-11-26 20:34:54.841435] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:54.707 [2024-11-26 20:34:54.841748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72188 ] 00:13:54.707 [2024-11-26 20:34:54.991281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.707 [2024-11-26 20:34:55.048012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.965 [2024-11-26 20:34:55.102881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.965 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.965 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:54.965 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:13:55.224 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:55.482 [2024-11-26 20:34:55.684028] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:55.482 TLSTESTn1 00:13:55.482 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:56.050 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:56.050 "subsystems": [ 00:13:56.050 { 00:13:56.050 "subsystem": "keyring", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "method": "keyring_file_add_key", 00:13:56.050 "params": { 00:13:56.050 "name": "key0", 00:13:56.050 "path": "/tmp/tmp.fVGwUt2wMd" 00:13:56.050 } 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "iobuf", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "method": "iobuf_set_options", 00:13:56.050 "params": { 00:13:56.050 "small_pool_count": 8192, 00:13:56.050 "large_pool_count": 1024, 00:13:56.050 "small_bufsize": 8192, 00:13:56.050 "large_bufsize": 135168, 00:13:56.050 "enable_numa": false 00:13:56.050 } 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "sock", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "method": "sock_set_default_impl", 00:13:56.050 "params": { 
00:13:56.050 "impl_name": "uring" 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "sock_impl_set_options", 00:13:56.050 "params": { 00:13:56.050 "impl_name": "ssl", 00:13:56.050 "recv_buf_size": 4096, 00:13:56.050 "send_buf_size": 4096, 00:13:56.050 "enable_recv_pipe": true, 00:13:56.050 "enable_quickack": false, 00:13:56.050 "enable_placement_id": 0, 00:13:56.050 "enable_zerocopy_send_server": true, 00:13:56.050 "enable_zerocopy_send_client": false, 00:13:56.050 "zerocopy_threshold": 0, 00:13:56.050 "tls_version": 0, 00:13:56.050 "enable_ktls": false 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "sock_impl_set_options", 00:13:56.050 "params": { 00:13:56.050 "impl_name": "posix", 00:13:56.050 "recv_buf_size": 2097152, 00:13:56.050 "send_buf_size": 2097152, 00:13:56.050 "enable_recv_pipe": true, 00:13:56.050 "enable_quickack": false, 00:13:56.050 "enable_placement_id": 0, 00:13:56.050 "enable_zerocopy_send_server": true, 00:13:56.050 "enable_zerocopy_send_client": false, 00:13:56.050 "zerocopy_threshold": 0, 00:13:56.050 "tls_version": 0, 00:13:56.050 "enable_ktls": false 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "sock_impl_set_options", 00:13:56.050 "params": { 00:13:56.050 "impl_name": "uring", 00:13:56.050 "recv_buf_size": 2097152, 00:13:56.050 "send_buf_size": 2097152, 00:13:56.050 "enable_recv_pipe": true, 00:13:56.050 "enable_quickack": false, 00:13:56.050 "enable_placement_id": 0, 00:13:56.050 "enable_zerocopy_send_server": false, 00:13:56.050 "enable_zerocopy_send_client": false, 00:13:56.050 "zerocopy_threshold": 0, 00:13:56.050 "tls_version": 0, 00:13:56.050 "enable_ktls": false 00:13:56.050 } 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "vmd", 00:13:56.050 "config": [] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "accel", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "method": "accel_set_options", 00:13:56.050 "params": { 00:13:56.050 "small_cache_size": 128, 00:13:56.050 "large_cache_size": 16, 00:13:56.050 "task_count": 2048, 00:13:56.050 "sequence_count": 2048, 00:13:56.050 "buf_count": 2048 00:13:56.050 } 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "bdev", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "method": "bdev_set_options", 00:13:56.050 "params": { 00:13:56.050 "bdev_io_pool_size": 65535, 00:13:56.050 "bdev_io_cache_size": 256, 00:13:56.050 "bdev_auto_examine": true, 00:13:56.050 "iobuf_small_cache_size": 128, 00:13:56.050 "iobuf_large_cache_size": 16 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_raid_set_options", 00:13:56.050 "params": { 00:13:56.050 "process_window_size_kb": 1024, 00:13:56.050 "process_max_bandwidth_mb_sec": 0 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_iscsi_set_options", 00:13:56.050 "params": { 00:13:56.050 "timeout_sec": 30 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_nvme_set_options", 00:13:56.050 "params": { 00:13:56.050 "action_on_timeout": "none", 00:13:56.050 "timeout_us": 0, 00:13:56.050 "timeout_admin_us": 0, 00:13:56.050 "keep_alive_timeout_ms": 10000, 00:13:56.050 "arbitration_burst": 0, 00:13:56.050 "low_priority_weight": 0, 00:13:56.050 "medium_priority_weight": 0, 00:13:56.050 "high_priority_weight": 0, 00:13:56.050 "nvme_adminq_poll_period_us": 10000, 00:13:56.050 "nvme_ioq_poll_period_us": 0, 00:13:56.050 "io_queue_requests": 0, 00:13:56.050 "delay_cmd_submit": 
true, 00:13:56.050 "transport_retry_count": 4, 00:13:56.050 "bdev_retry_count": 3, 00:13:56.050 "transport_ack_timeout": 0, 00:13:56.050 "ctrlr_loss_timeout_sec": 0, 00:13:56.050 "reconnect_delay_sec": 0, 00:13:56.050 "fast_io_fail_timeout_sec": 0, 00:13:56.050 "disable_auto_failback": false, 00:13:56.050 "generate_uuids": false, 00:13:56.050 "transport_tos": 0, 00:13:56.050 "nvme_error_stat": false, 00:13:56.050 "rdma_srq_size": 0, 00:13:56.050 "io_path_stat": false, 00:13:56.050 "allow_accel_sequence": false, 00:13:56.050 "rdma_max_cq_size": 0, 00:13:56.050 "rdma_cm_event_timeout_ms": 0, 00:13:56.050 "dhchap_digests": [ 00:13:56.050 "sha256", 00:13:56.050 "sha384", 00:13:56.050 "sha512" 00:13:56.050 ], 00:13:56.050 "dhchap_dhgroups": [ 00:13:56.050 "null", 00:13:56.050 "ffdhe2048", 00:13:56.050 "ffdhe3072", 00:13:56.050 "ffdhe4096", 00:13:56.050 "ffdhe6144", 00:13:56.050 "ffdhe8192" 00:13:56.050 ] 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_nvme_set_hotplug", 00:13:56.050 "params": { 00:13:56.050 "period_us": 100000, 00:13:56.050 "enable": false 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_malloc_create", 00:13:56.050 "params": { 00:13:56.050 "name": "malloc0", 00:13:56.050 "num_blocks": 8192, 00:13:56.050 "block_size": 4096, 00:13:56.050 "physical_block_size": 4096, 00:13:56.050 "uuid": "5e29ce83-3830-4c0b-86aa-b04da827b495", 00:13:56.050 "optimal_io_boundary": 0, 00:13:56.050 "md_size": 0, 00:13:56.050 "dif_type": 0, 00:13:56.050 "dif_is_head_of_md": false, 00:13:56.050 "dif_pi_format": 0 00:13:56.050 } 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_wait_for_examine" 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "nbd", 00:13:56.050 "config": [] 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "subsystem": "scheduler", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "method": "framework_set_scheduler", 00:13:56.050 "params": { 00:13:56.050 "name": "static" 00:13:56.050 } 00:13:56.051 } 00:13:56.051 ] 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "subsystem": "nvmf", 00:13:56.051 "config": [ 00:13:56.051 { 00:13:56.051 "method": "nvmf_set_config", 00:13:56.051 "params": { 00:13:56.051 "discovery_filter": "match_any", 00:13:56.051 "admin_cmd_passthru": { 00:13:56.051 "identify_ctrlr": false 00:13:56.051 }, 00:13:56.051 "dhchap_digests": [ 00:13:56.051 "sha256", 00:13:56.051 "sha384", 00:13:56.051 "sha512" 00:13:56.051 ], 00:13:56.051 "dhchap_dhgroups": [ 00:13:56.051 "null", 00:13:56.051 "ffdhe2048", 00:13:56.051 "ffdhe3072", 00:13:56.051 "ffdhe4096", 00:13:56.051 "ffdhe6144", 00:13:56.051 "ffdhe8192" 00:13:56.051 ] 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_set_max_subsystems", 00:13:56.051 "params": { 00:13:56.051 "max_subsystems": 1024 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_set_crdt", 00:13:56.051 "params": { 00:13:56.051 "crdt1": 0, 00:13:56.051 "crdt2": 0, 00:13:56.051 "crdt3": 0 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_create_transport", 00:13:56.051 "params": { 00:13:56.051 "trtype": "TCP", 00:13:56.051 "max_queue_depth": 128, 00:13:56.051 "max_io_qpairs_per_ctrlr": 127, 00:13:56.051 "in_capsule_data_size": 4096, 00:13:56.051 "max_io_size": 131072, 00:13:56.051 "io_unit_size": 131072, 00:13:56.051 "max_aq_depth": 128, 00:13:56.051 "num_shared_buffers": 511, 00:13:56.051 "buf_cache_size": 4294967295, 00:13:56.051 "dif_insert_or_strip": false, 00:13:56.051 "zcopy": false, 
00:13:56.051 "c2h_success": false, 00:13:56.051 "sock_priority": 0, 00:13:56.051 "abort_timeout_sec": 1, 00:13:56.051 "ack_timeout": 0, 00:13:56.051 "data_wr_pool_size": 0 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_create_subsystem", 00:13:56.051 "params": { 00:13:56.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.051 "allow_any_host": false, 00:13:56.051 "serial_number": "SPDK00000000000001", 00:13:56.051 "model_number": "SPDK bdev Controller", 00:13:56.051 "max_namespaces": 10, 00:13:56.051 "min_cntlid": 1, 00:13:56.051 "max_cntlid": 65519, 00:13:56.051 "ana_reporting": false 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_subsystem_add_host", 00:13:56.051 "params": { 00:13:56.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.051 "host": "nqn.2016-06.io.spdk:host1", 00:13:56.051 "psk": "key0" 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_subsystem_add_ns", 00:13:56.051 "params": { 00:13:56.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.051 "namespace": { 00:13:56.051 "nsid": 1, 00:13:56.051 "bdev_name": "malloc0", 00:13:56.051 "nguid": "5E29CE8338304C0B86AAB04DA827B495", 00:13:56.051 "uuid": "5e29ce83-3830-4c0b-86aa-b04da827b495", 00:13:56.051 "no_auto_visible": false 00:13:56.051 } 00:13:56.051 } 00:13:56.051 }, 00:13:56.051 { 00:13:56.051 "method": "nvmf_subsystem_add_listener", 00:13:56.051 "params": { 00:13:56.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.051 "listen_address": { 00:13:56.051 "trtype": "TCP", 00:13:56.051 "adrfam": "IPv4", 00:13:56.051 "traddr": "10.0.0.3", 00:13:56.051 "trsvcid": "4420" 00:13:56.051 }, 00:13:56.051 "secure_channel": true 00:13:56.051 } 00:13:56.051 } 00:13:56.051 ] 00:13:56.051 } 00:13:56.051 ] 00:13:56.051 }' 00:13:56.051 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:56.310 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:56.310 "subsystems": [ 00:13:56.310 { 00:13:56.310 "subsystem": "keyring", 00:13:56.310 "config": [ 00:13:56.310 { 00:13:56.310 "method": "keyring_file_add_key", 00:13:56.310 "params": { 00:13:56.310 "name": "key0", 00:13:56.310 "path": "/tmp/tmp.fVGwUt2wMd" 00:13:56.310 } 00:13:56.310 } 00:13:56.310 ] 00:13:56.310 }, 00:13:56.310 { 00:13:56.310 "subsystem": "iobuf", 00:13:56.310 "config": [ 00:13:56.310 { 00:13:56.310 "method": "iobuf_set_options", 00:13:56.310 "params": { 00:13:56.310 "small_pool_count": 8192, 00:13:56.310 "large_pool_count": 1024, 00:13:56.310 "small_bufsize": 8192, 00:13:56.310 "large_bufsize": 135168, 00:13:56.310 "enable_numa": false 00:13:56.310 } 00:13:56.310 } 00:13:56.310 ] 00:13:56.310 }, 00:13:56.310 { 00:13:56.310 "subsystem": "sock", 00:13:56.310 "config": [ 00:13:56.310 { 00:13:56.310 "method": "sock_set_default_impl", 00:13:56.310 "params": { 00:13:56.310 "impl_name": "uring" 00:13:56.310 } 00:13:56.310 }, 00:13:56.311 { 00:13:56.311 "method": "sock_impl_set_options", 00:13:56.311 "params": { 00:13:56.311 "impl_name": "ssl", 00:13:56.311 "recv_buf_size": 4096, 00:13:56.311 "send_buf_size": 4096, 00:13:56.311 "enable_recv_pipe": true, 00:13:56.311 "enable_quickack": false, 00:13:56.311 "enable_placement_id": 0, 00:13:56.311 "enable_zerocopy_send_server": true, 00:13:56.311 "enable_zerocopy_send_client": false, 00:13:56.311 "zerocopy_threshold": 0, 00:13:56.311 "tls_version": 0, 00:13:56.311 "enable_ktls": false 00:13:56.311 } 00:13:56.311 }, 
00:13:56.311 { 00:13:56.311 "method": "sock_impl_set_options", 00:13:56.311 "params": { 00:13:56.311 "impl_name": "posix", 00:13:56.311 "recv_buf_size": 2097152, 00:13:56.311 "send_buf_size": 2097152, 00:13:56.311 "enable_recv_pipe": true, 00:13:56.311 "enable_quickack": false, 00:13:56.311 "enable_placement_id": 0, 00:13:56.311 "enable_zerocopy_send_server": true, 00:13:56.311 "enable_zerocopy_send_client": false, 00:13:56.311 "zerocopy_threshold": 0, 00:13:56.311 "tls_version": 0, 00:13:56.311 "enable_ktls": false 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "sock_impl_set_options", 00:13:56.311 "params": { 00:13:56.311 "impl_name": "uring", 00:13:56.311 "recv_buf_size": 2097152, 00:13:56.311 "send_buf_size": 2097152, 00:13:56.311 "enable_recv_pipe": true, 00:13:56.311 "enable_quickack": false, 00:13:56.311 "enable_placement_id": 0, 00:13:56.311 "enable_zerocopy_send_server": false, 00:13:56.311 "enable_zerocopy_send_client": false, 00:13:56.311 "zerocopy_threshold": 0, 00:13:56.311 "tls_version": 0, 00:13:56.311 "enable_ktls": false 00:13:56.311 } 00:13:56.311 } 00:13:56.311 ] 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "subsystem": "vmd", 00:13:56.311 "config": [] 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "subsystem": "accel", 00:13:56.311 "config": [ 00:13:56.311 { 00:13:56.311 "method": "accel_set_options", 00:13:56.311 "params": { 00:13:56.311 "small_cache_size": 128, 00:13:56.311 "large_cache_size": 16, 00:13:56.311 "task_count": 2048, 00:13:56.311 "sequence_count": 2048, 00:13:56.311 "buf_count": 2048 00:13:56.311 } 00:13:56.311 } 00:13:56.311 ] 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "subsystem": "bdev", 00:13:56.311 "config": [ 00:13:56.311 { 00:13:56.311 "method": "bdev_set_options", 00:13:56.311 "params": { 00:13:56.311 "bdev_io_pool_size": 65535, 00:13:56.311 "bdev_io_cache_size": 256, 00:13:56.311 "bdev_auto_examine": true, 00:13:56.311 "iobuf_small_cache_size": 128, 00:13:56.311 "iobuf_large_cache_size": 16 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "bdev_raid_set_options", 00:13:56.311 "params": { 00:13:56.311 "process_window_size_kb": 1024, 00:13:56.311 "process_max_bandwidth_mb_sec": 0 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "bdev_iscsi_set_options", 00:13:56.311 "params": { 00:13:56.311 "timeout_sec": 30 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "bdev_nvme_set_options", 00:13:56.311 "params": { 00:13:56.311 "action_on_timeout": "none", 00:13:56.311 "timeout_us": 0, 00:13:56.311 "timeout_admin_us": 0, 00:13:56.311 "keep_alive_timeout_ms": 10000, 00:13:56.311 "arbitration_burst": 0, 00:13:56.311 "low_priority_weight": 0, 00:13:56.311 "medium_priority_weight": 0, 00:13:56.311 "high_priority_weight": 0, 00:13:56.311 "nvme_adminq_poll_period_us": 10000, 00:13:56.311 "nvme_ioq_poll_period_us": 0, 00:13:56.311 "io_queue_requests": 512, 00:13:56.311 "delay_cmd_submit": true, 00:13:56.311 "transport_retry_count": 4, 00:13:56.311 "bdev_retry_count": 3, 00:13:56.311 "transport_ack_timeout": 0, 00:13:56.311 "ctrlr_loss_timeout_sec": 0, 00:13:56.311 "reconnect_delay_sec": 0, 00:13:56.311 "fast_io_fail_timeout_sec": 0, 00:13:56.311 "disable_auto_failback": false, 00:13:56.311 "generate_uuids": false, 00:13:56.311 "transport_tos": 0, 00:13:56.311 "nvme_error_stat": false, 00:13:56.311 "rdma_srq_size": 0, 00:13:56.311 "io_path_stat": false, 00:13:56.311 "allow_accel_sequence": false, 00:13:56.311 "rdma_max_cq_size": 0, 00:13:56.311 "rdma_cm_event_timeout_ms": 0, 00:13:56.311 
"dhchap_digests": [ 00:13:56.311 "sha256", 00:13:56.311 "sha384", 00:13:56.311 "sha512" 00:13:56.311 ], 00:13:56.311 "dhchap_dhgroups": [ 00:13:56.311 "null", 00:13:56.311 "ffdhe2048", 00:13:56.311 "ffdhe3072", 00:13:56.311 "ffdhe4096", 00:13:56.311 "ffdhe6144", 00:13:56.311 "ffdhe8192" 00:13:56.311 ] 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "bdev_nvme_attach_controller", 00:13:56.311 "params": { 00:13:56.311 "name": "TLSTEST", 00:13:56.311 "trtype": "TCP", 00:13:56.311 "adrfam": "IPv4", 00:13:56.311 "traddr": "10.0.0.3", 00:13:56.311 "trsvcid": "4420", 00:13:56.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.311 "prchk_reftag": false, 00:13:56.311 "prchk_guard": false, 00:13:56.311 "ctrlr_loss_timeout_sec": 0, 00:13:56.311 "reconnect_delay_sec": 0, 00:13:56.311 "fast_io_fail_timeout_sec": 0, 00:13:56.311 "psk": "key0", 00:13:56.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:56.311 "hdgst": false, 00:13:56.311 "ddgst": false, 00:13:56.311 "multipath": "multipath" 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "bdev_nvme_set_hotplug", 00:13:56.311 "params": { 00:13:56.311 "period_us": 100000, 00:13:56.311 "enable": false 00:13:56.311 } 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "method": "bdev_wait_for_examine" 00:13:56.311 } 00:13:56.311 ] 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "subsystem": "nbd", 00:13:56.311 "config": [] 00:13:56.311 } 00:13:56.311 ] 00:13:56.311 }' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72188 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72188 ']' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72188 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72188 00:13:56.311 killing process with pid 72188 00:13:56.311 Received shutdown signal, test time was about 10.000000 seconds 00:13:56.311 00:13:56.311 Latency(us) 00:13:56.311 [2024-11-26T20:34:56.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.311 [2024-11-26T20:34:56.666Z] =================================================================================================================== 00:13:56.311 [2024-11-26T20:34:56.666Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72188' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72188 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72188 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72139 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72139 ']' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72139 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.311 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72139 00:13:56.570 killing process with pid 72139 00:13:56.570 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:56.570 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.570 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72139' 00:13:56.570 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72139 00:13:56.570 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72139 00:13:56.829 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:56.829 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.829 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:56.829 "subsystems": [ 00:13:56.829 { 00:13:56.829 "subsystem": "keyring", 00:13:56.829 "config": [ 00:13:56.829 { 00:13:56.829 "method": "keyring_file_add_key", 00:13:56.829 "params": { 00:13:56.829 "name": "key0", 00:13:56.829 "path": "/tmp/tmp.fVGwUt2wMd" 00:13:56.829 } 00:13:56.829 } 00:13:56.829 ] 00:13:56.829 }, 00:13:56.829 { 00:13:56.829 "subsystem": "iobuf", 00:13:56.829 "config": [ 00:13:56.829 { 00:13:56.829 "method": "iobuf_set_options", 00:13:56.829 "params": { 00:13:56.829 "small_pool_count": 8192, 00:13:56.829 "large_pool_count": 1024, 00:13:56.829 "small_bufsize": 8192, 00:13:56.829 "large_bufsize": 135168, 00:13:56.829 "enable_numa": false 00:13:56.829 } 00:13:56.829 } 00:13:56.829 ] 00:13:56.829 }, 00:13:56.829 { 00:13:56.829 "subsystem": "sock", 00:13:56.829 "config": [ 00:13:56.829 { 00:13:56.829 "method": "sock_set_default_impl", 00:13:56.829 "params": { 00:13:56.829 "impl_name": "uring" 00:13:56.829 } 00:13:56.829 }, 00:13:56.829 { 00:13:56.829 "method": "sock_impl_set_options", 00:13:56.829 "params": { 00:13:56.829 "impl_name": "ssl", 00:13:56.829 "recv_buf_size": 4096, 00:13:56.829 "send_buf_size": 4096, 00:13:56.829 "enable_recv_pipe": true, 00:13:56.829 "enable_quickack": false, 00:13:56.829 "enable_placement_id": 0, 00:13:56.829 "enable_zerocopy_send_server": true, 00:13:56.829 "enable_zerocopy_send_client": false, 00:13:56.829 "zerocopy_threshold": 0, 00:13:56.829 "tls_version": 0, 00:13:56.829 "enable_ktls": false 00:13:56.829 } 00:13:56.829 }, 00:13:56.829 { 00:13:56.829 "method": "sock_impl_set_options", 00:13:56.829 "params": { 00:13:56.829 "impl_name": "posix", 00:13:56.829 "recv_buf_size": 2097152, 00:13:56.829 "send_buf_size": 2097152, 00:13:56.829 "enable_recv_pipe": true, 00:13:56.829 "enable_quickack": false, 00:13:56.829 "enable_placement_id": 0, 00:13:56.829 "enable_zerocopy_send_server": true, 00:13:56.829 "enable_zerocopy_send_client": false, 00:13:56.829 "zerocopy_threshold": 0, 00:13:56.829 "tls_version": 0, 00:13:56.829 "enable_ktls": false 00:13:56.829 } 00:13:56.829 }, 00:13:56.829 { 00:13:56.829 "method": "sock_impl_set_options", 00:13:56.829 "params": { 00:13:56.829 "impl_name": "uring", 00:13:56.829 "recv_buf_size": 2097152, 00:13:56.829 
"send_buf_size": 2097152, 00:13:56.829 "enable_recv_pipe": true, 00:13:56.829 "enable_quickack": false, 00:13:56.829 "enable_placement_id": 0, 00:13:56.829 "enable_zerocopy_send_server": false, 00:13:56.829 "enable_zerocopy_send_client": false, 00:13:56.829 "zerocopy_threshold": 0, 00:13:56.829 "tls_version": 0, 00:13:56.829 "enable_ktls": false 00:13:56.829 } 00:13:56.829 } 00:13:56.829 ] 00:13:56.829 }, 00:13:56.829 { 00:13:56.829 "subsystem": "vmd", 00:13:56.829 "config": [] 00:13:56.829 }, 00:13:56.830 { 00:13:56.830 "subsystem": "accel", 00:13:56.830 "config": [ 00:13:56.830 { 00:13:56.830 "method": "accel_set_options", 00:13:56.830 "params": { 00:13:56.830 "small_cache_size": 128, 00:13:56.830 "large_cache_size": 16, 00:13:56.830 "task_count": 2048, 00:13:56.830 "sequence_count": 2048, 00:13:56.830 "buf_count": 2048 00:13:56.830 } 00:13:56.830 } 00:13:56.830 ] 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "subsystem": "bdev", 00:13:56.830 "config": [ 00:13:56.830 { 00:13:56.830 "method": "bdev_set_options", 00:13:56.830 "params": { 00:13:56.830 "bdev_io_pool_size": 65535, 00:13:56.830 "bdev_io_cache_size": 256, 00:13:56.830 "bdev_auto_examine": true, 00:13:56.830 "iobuf_small_cache_size": 128, 00:13:56.830 "iobuf_large_cache_size": 16 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "bdev_raid_set_options", 00:13:56.830 "params": { 00:13:56.830 "process_window_size_kb": 1024, 00:13:56.830 "process_max_bandwidth_mb_sec": 0 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "bdev_iscsi_set_options", 00:13:56.830 "params": { 00:13:56.830 "timeout_sec": 30 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "bdev_nvme_set_options", 00:13:56.830 "params": { 00:13:56.830 "action_on_timeout": "none", 00:13:56.830 "timeout_us": 0, 00:13:56.830 "timeout_admin_us": 0, 00:13:56.830 "keep_alive_timeout_ms": 10000, 00:13:56.830 "arbitration_burst": 0, 00:13:56.830 "low_priority_weight": 0, 00:13:56.830 "medium_priority_weight": 0, 00:13:56.830 "high_priority_weight": 0, 00:13:56.830 "nvme_adminq_poll_period_us": 10000, 00:13:56.830 "nvme_ioq_poll_period_us": 0, 00:13:56.830 "io_queue_requests": 0, 00:13:56.830 "delay_cmd_submit": true, 00:13:56.830 "transport_retry_count": 4, 00:13:56.830 "bdev_retry_count": 3, 00:13:56.830 "transport_ack_timeout": 0, 00:13:56.830 "ctrlr_loss_timeout_sec": 0, 00:13:56.830 "reconnect_delay_sec": 0, 00:13:56.830 "fast_io_fail_timeout_sec": 0, 00:13:56.830 "disable_auto_failback": false, 00:13:56.830 "generate_uuids": false, 00:13:56.830 "transport_tos": 0, 00:13:56.830 "nvme_error_stat": false, 00:13:56.830 "rdma_srq_size": 0, 00:13:56.830 "io_path_stat": false, 00:13:56.830 "allow_accel_sequence": false, 00:13:56.830 "rdma_max_cq_size": 0, 00:13:56.830 "rdma_cm_event_timeout_ms": 0, 00:13:56.830 "dhchap_digests": [ 00:13:56.830 "sha256", 00:13:56.830 "sha384", 00:13:56.830 "sha512" 00:13:56.830 ], 00:13:56.830 "dhchap_dhgroups": [ 00:13:56.830 "null", 00:13:56.830 "ffdhe2048", 00:13:56.830 "ffdhe3072", 00:13:56.830 "ffdhe4096", 00:13:56.830 "ffdhe6144", 00:13:56.830 "ffdhe8192" 00:13:56.830 ] 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "bdev_nvme_set_hotplug", 00:13:56.830 "params": { 00:13:56.830 "period_us": 100000, 00:13:56.830 "enable": false 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "bdev_malloc_create", 00:13:56.830 "params": { 00:13:56.830 "name": "malloc0", 00:13:56.830 "num_blocks": 8192, 00:13:56.830 "block_size": 4096, 00:13:56.830 
"physical_block_size": 4096, 00:13:56.830 "uuid": "5e29ce83-3830-4c0b-86aa-b04da827b495", 00:13:56.830 "optimal_io_boundary": 0, 00:13:56.830 "md_size": 0, 00:13:56.830 "dif_type": 0, 00:13:56.830 "dif_is_head_of_md": false, 00:13:56.830 "dif_pi_format": 0 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "bdev_wait_for_examine" 00:13:56.830 } 00:13:56.830 ] 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "subsystem": "nbd", 00:13:56.830 "config": [] 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "subsystem": "scheduler", 00:13:56.830 "config": [ 00:13:56.830 { 00:13:56.830 "method": "framework_set_scheduler", 00:13:56.830 "params": { 00:13:56.830 "name": "static" 00:13:56.830 } 00:13:56.830 } 00:13:56.830 ] 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "subsystem": "nvmf", 00:13:56.830 "config": [ 00:13:56.830 { 00:13:56.830 "method": "nvmf_set_config", 00:13:56.830 "params": { 00:13:56.830 "discovery_filter": "match_any", 00:13:56.830 "admin_cmd_passthru": { 00:13:56.830 "identify_ctrlr": false 00:13:56.830 }, 00:13:56.830 "dhchap_digests": [ 00:13:56.830 "sha256", 00:13:56.830 "sha384", 00:13:56.830 "sha512" 00:13:56.830 ], 00:13:56.830 "dhchap_dhgroups": [ 00:13:56.830 "null", 00:13:56.830 "ffdhe2048", 00:13:56.830 "ffdhe3072", 00:13:56.830 "ffdhe4096", 00:13:56.830 "ffdhe6144", 00:13:56.830 "ffdhe8192" 00:13:56.830 ] 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_set_max_subsystems", 00:13:56.830 "params": { 00:13:56.830 "max_subsystems": 1024 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_set_crdt", 00:13:56.830 "params": { 00:13:56.830 "crdt1": 0, 00:13:56.830 "crdt2": 0, 00:13:56.830 "crdt3": 0 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_create_transport", 00:13:56.830 "params": { 00:13:56.830 "trtype": "TCP", 00:13:56.830 "max_queue_depth": 128, 00:13:56.830 "max_io_qpairs_per_ctrlr": 127, 00:13:56.830 "in_capsule_data_size": 4096, 00:13:56.830 "max_io_size": 131072, 00:13:56.830 "io_unit_size": 131072, 00:13:56.830 "max_aq_depth": 128, 00:13:56.830 "num_shared_buffers": 511, 00:13:56.830 "buf_cache_size": 4294967295, 00:13:56.830 "dif_insert_or_strip": false, 00:13:56.830 "zcopy": false, 00:13:56.830 "c2h_success": false, 00:13:56.830 "sock_priority": 0, 00:13:56.830 "abort_timeout_sec": 1, 00:13:56.830 "ack_timeout": 0, 00:13:56.830 "data_wr_pool_size": 0 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_create_subsystem", 00:13:56.830 "params": { 00:13:56.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.830 "allow_any_host": false, 00:13:56.830 "serial_number": "SPDK00000000000001", 00:13:56.830 "model_number": "SPDK bdev Controller", 00:13:56.830 "max_namespaces": 10, 00:13:56.830 "min_cntlid": 1, 00:13:56.830 "max_cntlid": 65519, 00:13:56.830 "ana_reporting": false 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_subsystem_add_host", 00:13:56.830 "params": { 00:13:56.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.830 "host": "nqn.2016-06.io.spdk:host1", 00:13:56.830 "psk": "key0" 00:13:56.830 } 00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_subsystem_add_ns", 00:13:56.830 "params": { 00:13:56.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.830 "namespace": { 00:13:56.830 "nsid": 1, 00:13:56.830 "bdev_name": "malloc0", 00:13:56.830 "nguid": "5E29CE8338304C0B86AAB04DA827B495", 00:13:56.830 "uuid": "5e29ce83-3830-4c0b-86aa-b04da827b495", 00:13:56.830 "no_auto_visible": false 00:13:56.830 } 00:13:56.830 } 
00:13:56.830 }, 00:13:56.830 { 00:13:56.830 "method": "nvmf_subsystem_add_listener", 00:13:56.830 "params": { 00:13:56.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.830 "listen_address": { 00:13:56.830 "trtype": "TCP", 00:13:56.830 "adrfam": "IPv4", 00:13:56.830 "traddr": "10.0.0.3", 00:13:56.830 "trsvcid": "4420" 00:13:56.830 }, 00:13:56.830 "secure_channel": true 00:13:56.830 } 00:13:56.830 } 00:13:56.830 ] 00:13:56.830 } 00:13:56.830 ] 00:13:56.831 }' 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72230 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72230 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72230 ']' 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.831 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.831 [2024-11-26 20:34:57.005248] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:56.831 [2024-11-26 20:34:57.005353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.831 [2024-11-26 20:34:57.150411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.089 [2024-11-26 20:34:57.227120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.089 [2024-11-26 20:34:57.227196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.089 [2024-11-26 20:34:57.227209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.089 [2024-11-26 20:34:57.227232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.089 [2024-11-26 20:34:57.227243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
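For reference, a condensed sketch of the launch pattern traced above: the target configuration JSON (the block ending in "secure_channel": true) is echoed into a process substitution and handed to nvmf_tgt as -c /dev/fd/62. The tgt_config variable is only an illustration of where that JSON lives; the paths, core mask and the waitforlisten helper are taken from the trace itself, so treat this as a sketch rather than the literal tls.sh text.

  # sketch, assuming tgt_config holds the JSON dumped above
  tgt_config='{ "subsystems": [ ... ] }'
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
      -c <(echo "$tgt_config") &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # autotest_common.sh helper; returns once the RPC socket is listening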
00:13:57.089 [2024-11-26 20:34:57.227772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.089 [2024-11-26 20:34:57.413485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.348 [2024-11-26 20:34:57.510878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.348 [2024-11-26 20:34:57.542813] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:57.348 [2024-11-26 20:34:57.543304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72264 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72264 /var/tmp/bdevperf.sock 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72264 ']' 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.914 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:57.914 "subsystems": [ 00:13:57.915 { 00:13:57.915 "subsystem": "keyring", 00:13:57.915 "config": [ 00:13:57.915 { 00:13:57.915 "method": "keyring_file_add_key", 00:13:57.915 "params": { 00:13:57.915 "name": "key0", 00:13:57.915 "path": "/tmp/tmp.fVGwUt2wMd" 00:13:57.915 } 00:13:57.915 } 00:13:57.915 ] 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "subsystem": "iobuf", 00:13:57.915 "config": [ 00:13:57.915 { 00:13:57.915 "method": "iobuf_set_options", 00:13:57.915 "params": { 00:13:57.915 "small_pool_count": 8192, 00:13:57.915 "large_pool_count": 1024, 00:13:57.915 "small_bufsize": 8192, 00:13:57.915 "large_bufsize": 135168, 00:13:57.915 "enable_numa": false 00:13:57.915 } 00:13:57.915 } 00:13:57.915 ] 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "subsystem": "sock", 00:13:57.915 "config": [ 00:13:57.915 { 00:13:57.915 "method": "sock_set_default_impl", 00:13:57.915 "params": { 00:13:57.915 "impl_name": "uring" 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "sock_impl_set_options", 00:13:57.915 "params": { 00:13:57.915 "impl_name": "ssl", 00:13:57.915 "recv_buf_size": 4096, 00:13:57.915 "send_buf_size": 4096, 00:13:57.915 "enable_recv_pipe": true, 00:13:57.915 "enable_quickack": false, 00:13:57.915 "enable_placement_id": 0, 00:13:57.915 "enable_zerocopy_send_server": true, 00:13:57.915 "enable_zerocopy_send_client": false, 00:13:57.915 "zerocopy_threshold": 0, 00:13:57.915 "tls_version": 0, 00:13:57.915 "enable_ktls": false 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "sock_impl_set_options", 00:13:57.915 "params": { 00:13:57.915 "impl_name": "posix", 00:13:57.915 "recv_buf_size": 2097152, 00:13:57.915 "send_buf_size": 2097152, 00:13:57.915 "enable_recv_pipe": true, 00:13:57.915 "enable_quickack": false, 00:13:57.915 "enable_placement_id": 0, 00:13:57.915 "enable_zerocopy_send_server": true, 00:13:57.915 "enable_zerocopy_send_client": false, 00:13:57.915 "zerocopy_threshold": 0, 00:13:57.915 "tls_version": 0, 00:13:57.915 "enable_ktls": false 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "sock_impl_set_options", 00:13:57.915 "params": { 00:13:57.915 "impl_name": "uring", 00:13:57.915 "recv_buf_size": 2097152, 00:13:57.915 "send_buf_size": 2097152, 00:13:57.915 "enable_recv_pipe": true, 00:13:57.915 "enable_quickack": false, 00:13:57.915 "enable_placement_id": 0, 00:13:57.915 "enable_zerocopy_send_server": false, 00:13:57.915 "enable_zerocopy_send_client": false, 00:13:57.915 "zerocopy_threshold": 0, 00:13:57.915 "tls_version": 0, 00:13:57.915 "enable_ktls": false 00:13:57.915 } 00:13:57.915 } 00:13:57.915 ] 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "subsystem": "vmd", 00:13:57.915 "config": [] 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "subsystem": "accel", 00:13:57.915 "config": [ 00:13:57.915 { 00:13:57.915 "method": "accel_set_options", 00:13:57.915 "params": { 00:13:57.915 "small_cache_size": 128, 00:13:57.915 "large_cache_size": 16, 00:13:57.915 "task_count": 2048, 00:13:57.915 "sequence_count": 
2048, 00:13:57.915 "buf_count": 2048 00:13:57.915 } 00:13:57.915 } 00:13:57.915 ] 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "subsystem": "bdev", 00:13:57.915 "config": [ 00:13:57.915 { 00:13:57.915 "method": "bdev_set_options", 00:13:57.915 "params": { 00:13:57.915 "bdev_io_pool_size": 65535, 00:13:57.915 "bdev_io_cache_size": 256, 00:13:57.915 "bdev_auto_examine": true, 00:13:57.915 "iobuf_small_cache_size": 128, 00:13:57.915 "iobuf_large_cache_size": 16 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "bdev_raid_set_options", 00:13:57.915 "params": { 00:13:57.915 "process_window_size_kb": 1024, 00:13:57.915 "process_max_bandwidth_mb_sec": 0 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "bdev_iscsi_set_options", 00:13:57.915 "params": { 00:13:57.915 "timeout_sec": 30 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "bdev_nvme_set_options", 00:13:57.915 "params": { 00:13:57.915 "action_on_timeout": "none", 00:13:57.915 "timeout_us": 0, 00:13:57.915 "timeout_admin_us": 0, 00:13:57.915 "keep_alive_timeout_ms": 10000, 00:13:57.915 "arbitration_burst": 0, 00:13:57.915 "low_priority_weight": 0, 00:13:57.915 "medium_priority_weight": 0, 00:13:57.915 "high_priority_weight": 0, 00:13:57.915 "nvme_adminq_poll_period_us": 10000, 00:13:57.915 "nvme_ioq_poll_period_us": 0, 00:13:57.915 "io_queue_requests": 512, 00:13:57.915 "delay_cmd_submit": true, 00:13:57.915 "transport_retry_count": 4, 00:13:57.915 "bdev_retry_count": 3, 00:13:57.915 "transport_ack_timeout": 0, 00:13:57.915 "ctrlr_loss_timeout_sec": 0, 00:13:57.915 "reconnect_delay_sec": 0, 00:13:57.915 "fast_io_fail_timeout_sec": 0, 00:13:57.915 "disable_auto_failback": false, 00:13:57.915 "generate_uuids": false, 00:13:57.915 "transport_tos": 0, 00:13:57.915 "nvme_error_stat": false, 00:13:57.915 "rdma_srq_size": 0, 00:13:57.915 "io_path_stat": false, 00:13:57.915 "allow_accel_sequence": false, 00:13:57.915 "rdma_max_cq_size": 0, 00:13:57.915 "rdma_cm_event_timeout_ms": 0, 00:13:57.915 "dhchap_digests": [ 00:13:57.915 "sha256", 00:13:57.915 "sha384", 00:13:57.915 "sha512" 00:13:57.915 ], 00:13:57.915 "dhchap_dhgroups": [ 00:13:57.915 "null", 00:13:57.915 "ffdhe2048", 00:13:57.915 "ffdhe3072", 00:13:57.915 "ffdhe4096", 00:13:57.915 "ffdhe6144", 00:13:57.915 "ffdhe8192" 00:13:57.915 ] 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "bdev_nvme_attach_controller", 00:13:57.915 "params": { 00:13:57.915 "name": "TLSTEST", 00:13:57.915 "trtype": "TCP", 00:13:57.915 "adrfam": "IPv4", 00:13:57.915 "traddr": "10.0.0.3", 00:13:57.915 "trsvcid": "4420", 00:13:57.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.915 "prchk_reftag": false, 00:13:57.915 "prchk_guard": false, 00:13:57.915 "ctrlr_loss_timeout_sec": 0, 00:13:57.915 "reconnect_delay_sec": 0, 00:13:57.915 "fast_io_fail_timeout_sec": 0, 00:13:57.915 "psk": "key0", 00:13:57.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.915 "hdgst": false, 00:13:57.915 "ddgst": false, 00:13:57.915 "multipath": "multipath" 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "bdev_nvme_set_hotplug", 00:13:57.915 "params": { 00:13:57.915 "period_us": 100000, 00:13:57.915 "enable": false 00:13:57.915 } 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "method": "bdev_wait_for_examine" 00:13:57.915 } 00:13:57.915 ] 00:13:57.915 }, 00:13:57.915 { 00:13:57.915 "subsystem": "nbd", 00:13:57.915 "config": [] 00:13:57.915 } 00:13:57.915 ] 00:13:57.915 }' 00:13:57.915 [2024-11-26 20:34:58.097765] Starting SPDK v25.01-pre git 
sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:13:57.915 [2024-11-26 20:34:58.097869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72264 ] 00:13:57.915 [2024-11-26 20:34:58.247857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.173 [2024-11-26 20:34:58.318671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.173 [2024-11-26 20:34:58.455170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.173 [2024-11-26 20:34:58.505701] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.737 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.737 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:58.737 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:58.995 Running I/O for 10 seconds... 00:14:00.892 3746.00 IOPS, 14.63 MiB/s [2024-11-26T20:35:02.180Z] 3806.50 IOPS, 14.87 MiB/s [2024-11-26T20:35:03.556Z] 3808.33 IOPS, 14.88 MiB/s [2024-11-26T20:35:04.492Z] 3822.50 IOPS, 14.93 MiB/s [2024-11-26T20:35:05.426Z] 3828.60 IOPS, 14.96 MiB/s [2024-11-26T20:35:06.361Z] 3835.83 IOPS, 14.98 MiB/s [2024-11-26T20:35:07.295Z] 3795.57 IOPS, 14.83 MiB/s [2024-11-26T20:35:08.229Z] 3829.00 IOPS, 14.96 MiB/s [2024-11-26T20:35:09.605Z] 3861.33 IOPS, 15.08 MiB/s [2024-11-26T20:35:09.605Z] 3885.00 IOPS, 15.18 MiB/s 00:14:09.250 Latency(us) 00:14:09.250 [2024-11-26T20:35:09.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.251 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:09.251 Verification LBA range: start 0x0 length 0x2000 00:14:09.251 TLSTESTn1 : 10.02 3891.39 15.20 0.00 0.00 32833.94 5600.35 31218.97 00:14:09.251 [2024-11-26T20:35:09.606Z] =================================================================================================================== 00:14:09.251 [2024-11-26T20:35:09.606Z] Total : 3891.39 15.20 0.00 0.00 32833.94 5600.35 31218.97 00:14:09.251 { 00:14:09.251 "results": [ 00:14:09.251 { 00:14:09.251 "job": "TLSTESTn1", 00:14:09.251 "core_mask": "0x4", 00:14:09.251 "workload": "verify", 00:14:09.251 "status": "finished", 00:14:09.251 "verify_range": { 00:14:09.251 "start": 0, 00:14:09.251 "length": 8192 00:14:09.251 }, 00:14:09.251 "queue_depth": 128, 00:14:09.251 "io_size": 4096, 00:14:09.251 "runtime": 10.016226, 00:14:09.251 "iops": 3891.3858373403314, 00:14:09.251 "mibps": 15.20072592711067, 00:14:09.251 "io_failed": 0, 00:14:09.251 "io_timeout": 0, 00:14:09.251 "avg_latency_us": 32833.94305210299, 00:14:09.251 "min_latency_us": 5600.349090909091, 00:14:09.251 "max_latency_us": 31218.967272727274 00:14:09.251 } 00:14:09.251 ], 00:14:09.251 "core_count": 1 00:14:09.251 } 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72264 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72264 ']' 00:14:09.251 
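The 10-second run above is driven by bdevperf.py perform_tests against the bdevperf RPC socket, and its outcome is printed both as the latency table and as the JSON results block shown above. If one wanted to pull the headline numbers out of a saved copy of that JSON, a purely illustrative one-liner (jq and the results file name are not part of the test):

  jq '.results[0] | {job, iops, mibps, avg_latency_us}' bdevperf_results.json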
20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72264 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72264 00:14:09.251 killing process with pid 72264 00:14:09.251 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.251 00:14:09.251 Latency(us) 00:14:09.251 [2024-11-26T20:35:09.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.251 [2024-11-26T20:35:09.606Z] =================================================================================================================== 00:14:09.251 [2024-11-26T20:35:09.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72264' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72264 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72264 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72230 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72230 ']' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72230 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72230 00:14:09.251 killing process with pid 72230 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72230' 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72230 00:14:09.251 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72230 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72403 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72403 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72403 ']' 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.510 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.510 [2024-11-26 20:35:09.804909] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:09.510 [2024-11-26 20:35:09.805354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.768 [2024-11-26 20:35:09.961744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.768 [2024-11-26 20:35:10.021526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.768 [2024-11-26 20:35:10.021588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.768 [2024-11-26 20:35:10.021601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.768 [2024-11-26 20:35:10.021609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.768 [2024-11-26 20:35:10.021617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:09.768 [2024-11-26 20:35:10.022015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.768 [2024-11-26 20:35:10.074709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.fVGwUt2wMd 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fVGwUt2wMd 00:14:10.706 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:10.964 [2024-11-26 20:35:11.084292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.964 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:11.222 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:11.481 [2024-11-26 20:35:11.676421] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:11.481 [2024-11-26 20:35:11.676670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:11.481 20:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:11.785 malloc0 00:14:11.785 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:12.044 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:14:12.303 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:12.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
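The setup_nvmf_tgt steps traced above amount to a single target-side RPC sequence for a TLS-enabled subsystem; condensed here as a sketch, with the key path and NQNs copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.fVGwUt2wMd                       # PSK file registered below as key0

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -k             # -k requests a TLS-secured (secure channel) listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 "$key"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0      # host1 authenticates with this PSK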
00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72464 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72464 /var/tmp/bdevperf.sock 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72464 ']' 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.562 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.821 [2024-11-26 20:35:12.928144] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:12.821 [2024-11-26 20:35:12.928494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72464 ] 00:14:12.821 [2024-11-26 20:35:13.072287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.821 [2024-11-26 20:35:13.130390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.080 [2024-11-26 20:35:13.187364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.648 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.648 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:13.648 20:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:14:13.907 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:14.167 [2024-11-26 20:35:14.461446] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.425 nvme0n1 00:14:14.425 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:14.425 Running I/O for 1 seconds... 
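The initiator side mirrors that setup through bdevperf's RPC socket, as in the commands traced just above: register the same PSK file as key0, attach the controller with --psk, then drive I/O with bdevperf.py. A condensed sketch (paths and NQNs taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd
  $rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests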
00:14:15.364 4038.00 IOPS, 15.77 MiB/s 00:14:15.364 Latency(us) 00:14:15.364 [2024-11-26T20:35:15.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.364 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:15.364 Verification LBA range: start 0x0 length 0x2000 00:14:15.364 nvme0n1 : 1.02 4067.38 15.89 0.00 0.00 31035.47 7566.43 21686.46 00:14:15.364 [2024-11-26T20:35:15.719Z] =================================================================================================================== 00:14:15.364 [2024-11-26T20:35:15.719Z] Total : 4067.38 15.89 0.00 0.00 31035.47 7566.43 21686.46 00:14:15.364 { 00:14:15.364 "results": [ 00:14:15.364 { 00:14:15.364 "job": "nvme0n1", 00:14:15.364 "core_mask": "0x2", 00:14:15.364 "workload": "verify", 00:14:15.364 "status": "finished", 00:14:15.364 "verify_range": { 00:14:15.364 "start": 0, 00:14:15.364 "length": 8192 00:14:15.364 }, 00:14:15.364 "queue_depth": 128, 00:14:15.364 "io_size": 4096, 00:14:15.364 "runtime": 1.024246, 00:14:15.364 "iops": 4067.382249967293, 00:14:15.364 "mibps": 15.888211913934738, 00:14:15.364 "io_failed": 0, 00:14:15.364 "io_timeout": 0, 00:14:15.364 "avg_latency_us": 31035.466901758828, 00:14:15.364 "min_latency_us": 7566.4290909090905, 00:14:15.364 "max_latency_us": 21686.458181818183 00:14:15.364 } 00:14:15.364 ], 00:14:15.364 "core_count": 1 00:14:15.364 } 00:14:15.364 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72464 00:14:15.364 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72464 ']' 00:14:15.364 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72464 00:14:15.364 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:15.364 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.364 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72464 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:15.623 killing process with pid 72464 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72464' 00:14:15.623 Received shutdown signal, test time was about 1.000000 seconds 00:14:15.623 00:14:15.623 Latency(us) 00:14:15.623 [2024-11-26T20:35:15.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.623 [2024-11-26T20:35:15.978Z] =================================================================================================================== 00:14:15.623 [2024-11-26T20:35:15.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72464 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72464 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72403 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72403 ']' 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72403 00:14:15.623 20:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72403 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.623 killing process with pid 72403 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72403' 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72403 00:14:15.623 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72403 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72510 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72510 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72510 ']' 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.933 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.933 [2024-11-26 20:35:16.248896] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:15.933 [2024-11-26 20:35:16.249019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.192 [2024-11-26 20:35:16.411280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.192 [2024-11-26 20:35:16.470199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.192 [2024-11-26 20:35:16.470259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.192 [2024-11-26 20:35:16.470272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.192 [2024-11-26 20:35:16.470280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.192 [2024-11-26 20:35:16.470287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.192 [2024-11-26 20:35:16.470681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.192 [2024-11-26 20:35:16.523390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.129 [2024-11-26 20:35:17.357390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.129 malloc0 00:14:17.129 [2024-11-26 20:35:17.392905] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.129 [2024-11-26 20:35:17.393134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72546 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72546 /var/tmp/bdevperf.sock 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72546 ']' 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.129 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.130 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:17.130 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:17.130 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.130 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.130 [2024-11-26 20:35:17.478148] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:17.130 [2024-11-26 20:35:17.478261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72546 ] 00:14:17.388 [2024-11-26 20:35:17.620879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.388 [2024-11-26 20:35:17.682464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.388 [2024-11-26 20:35:17.735325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.322 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.322 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:18.323 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fVGwUt2wMd 00:14:18.580 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:18.838 [2024-11-26 20:35:19.021746] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.838 nvme0n1 00:14:18.838 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.095 Running I/O for 1 seconds... 
00:14:20.030 4128.00 IOPS, 16.12 MiB/s 00:14:20.030 Latency(us) 00:14:20.030 [2024-11-26T20:35:20.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.030 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.030 Verification LBA range: start 0x0 length 0x2000 00:14:20.030 nvme0n1 : 1.02 4172.92 16.30 0.00 0.00 30275.26 3425.75 23592.96 00:14:20.030 [2024-11-26T20:35:20.386Z] =================================================================================================================== 00:14:20.031 [2024-11-26T20:35:20.386Z] Total : 4172.92 16.30 0.00 0.00 30275.26 3425.75 23592.96 00:14:20.031 { 00:14:20.031 "results": [ 00:14:20.031 { 00:14:20.031 "job": "nvme0n1", 00:14:20.031 "core_mask": "0x2", 00:14:20.031 "workload": "verify", 00:14:20.031 "status": "finished", 00:14:20.031 "verify_range": { 00:14:20.031 "start": 0, 00:14:20.031 "length": 8192 00:14:20.031 }, 00:14:20.031 "queue_depth": 128, 00:14:20.031 "io_size": 4096, 00:14:20.031 "runtime": 1.02015, 00:14:20.031 "iops": 4172.915747684164, 00:14:20.031 "mibps": 16.300452139391265, 00:14:20.031 "io_failed": 0, 00:14:20.031 "io_timeout": 0, 00:14:20.031 "avg_latency_us": 30275.25943921242, 00:14:20.031 "min_latency_us": 3425.7454545454543, 00:14:20.031 "max_latency_us": 23592.96 00:14:20.031 } 00:14:20.031 ], 00:14:20.031 "core_count": 1 00:14:20.031 } 00:14:20.031 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:20.031 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.031 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.290 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.290 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:20.290 "subsystems": [ 00:14:20.290 { 00:14:20.290 "subsystem": "keyring", 00:14:20.290 "config": [ 00:14:20.290 { 00:14:20.290 "method": "keyring_file_add_key", 00:14:20.290 "params": { 00:14:20.290 "name": "key0", 00:14:20.290 "path": "/tmp/tmp.fVGwUt2wMd" 00:14:20.290 } 00:14:20.290 } 00:14:20.290 ] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "iobuf", 00:14:20.290 "config": [ 00:14:20.290 { 00:14:20.290 "method": "iobuf_set_options", 00:14:20.290 "params": { 00:14:20.290 "small_pool_count": 8192, 00:14:20.290 "large_pool_count": 1024, 00:14:20.290 "small_bufsize": 8192, 00:14:20.290 "large_bufsize": 135168, 00:14:20.290 "enable_numa": false 00:14:20.290 } 00:14:20.290 } 00:14:20.290 ] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "sock", 00:14:20.290 "config": [ 00:14:20.290 { 00:14:20.290 "method": "sock_set_default_impl", 00:14:20.290 "params": { 00:14:20.290 "impl_name": "uring" 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "sock_impl_set_options", 00:14:20.290 "params": { 00:14:20.290 "impl_name": "ssl", 00:14:20.290 "recv_buf_size": 4096, 00:14:20.290 "send_buf_size": 4096, 00:14:20.290 "enable_recv_pipe": true, 00:14:20.290 "enable_quickack": false, 00:14:20.290 "enable_placement_id": 0, 00:14:20.290 "enable_zerocopy_send_server": true, 00:14:20.290 "enable_zerocopy_send_client": false, 00:14:20.290 "zerocopy_threshold": 0, 00:14:20.290 "tls_version": 0, 00:14:20.290 "enable_ktls": false 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "sock_impl_set_options", 00:14:20.290 "params": { 00:14:20.290 "impl_name": "posix", 
00:14:20.290 "recv_buf_size": 2097152, 00:14:20.290 "send_buf_size": 2097152, 00:14:20.290 "enable_recv_pipe": true, 00:14:20.290 "enable_quickack": false, 00:14:20.290 "enable_placement_id": 0, 00:14:20.290 "enable_zerocopy_send_server": true, 00:14:20.290 "enable_zerocopy_send_client": false, 00:14:20.290 "zerocopy_threshold": 0, 00:14:20.290 "tls_version": 0, 00:14:20.290 "enable_ktls": false 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "sock_impl_set_options", 00:14:20.290 "params": { 00:14:20.290 "impl_name": "uring", 00:14:20.290 "recv_buf_size": 2097152, 00:14:20.290 "send_buf_size": 2097152, 00:14:20.290 "enable_recv_pipe": true, 00:14:20.290 "enable_quickack": false, 00:14:20.290 "enable_placement_id": 0, 00:14:20.290 "enable_zerocopy_send_server": false, 00:14:20.290 "enable_zerocopy_send_client": false, 00:14:20.290 "zerocopy_threshold": 0, 00:14:20.290 "tls_version": 0, 00:14:20.290 "enable_ktls": false 00:14:20.290 } 00:14:20.290 } 00:14:20.290 ] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "vmd", 00:14:20.290 "config": [] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "accel", 00:14:20.290 "config": [ 00:14:20.290 { 00:14:20.290 "method": "accel_set_options", 00:14:20.290 "params": { 00:14:20.290 "small_cache_size": 128, 00:14:20.290 "large_cache_size": 16, 00:14:20.290 "task_count": 2048, 00:14:20.290 "sequence_count": 2048, 00:14:20.290 "buf_count": 2048 00:14:20.290 } 00:14:20.290 } 00:14:20.290 ] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "bdev", 00:14:20.290 "config": [ 00:14:20.290 { 00:14:20.290 "method": "bdev_set_options", 00:14:20.290 "params": { 00:14:20.290 "bdev_io_pool_size": 65535, 00:14:20.290 "bdev_io_cache_size": 256, 00:14:20.290 "bdev_auto_examine": true, 00:14:20.290 "iobuf_small_cache_size": 128, 00:14:20.290 "iobuf_large_cache_size": 16 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "bdev_raid_set_options", 00:14:20.290 "params": { 00:14:20.290 "process_window_size_kb": 1024, 00:14:20.290 "process_max_bandwidth_mb_sec": 0 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "bdev_iscsi_set_options", 00:14:20.290 "params": { 00:14:20.290 "timeout_sec": 30 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "bdev_nvme_set_options", 00:14:20.290 "params": { 00:14:20.290 "action_on_timeout": "none", 00:14:20.290 "timeout_us": 0, 00:14:20.290 "timeout_admin_us": 0, 00:14:20.290 "keep_alive_timeout_ms": 10000, 00:14:20.290 "arbitration_burst": 0, 00:14:20.290 "low_priority_weight": 0, 00:14:20.290 "medium_priority_weight": 0, 00:14:20.290 "high_priority_weight": 0, 00:14:20.290 "nvme_adminq_poll_period_us": 10000, 00:14:20.290 "nvme_ioq_poll_period_us": 0, 00:14:20.290 "io_queue_requests": 0, 00:14:20.290 "delay_cmd_submit": true, 00:14:20.290 "transport_retry_count": 4, 00:14:20.290 "bdev_retry_count": 3, 00:14:20.290 "transport_ack_timeout": 0, 00:14:20.290 "ctrlr_loss_timeout_sec": 0, 00:14:20.290 "reconnect_delay_sec": 0, 00:14:20.290 "fast_io_fail_timeout_sec": 0, 00:14:20.290 "disable_auto_failback": false, 00:14:20.290 "generate_uuids": false, 00:14:20.290 "transport_tos": 0, 00:14:20.290 "nvme_error_stat": false, 00:14:20.290 "rdma_srq_size": 0, 00:14:20.290 "io_path_stat": false, 00:14:20.290 "allow_accel_sequence": false, 00:14:20.290 "rdma_max_cq_size": 0, 00:14:20.290 "rdma_cm_event_timeout_ms": 0, 00:14:20.290 "dhchap_digests": [ 00:14:20.290 "sha256", 00:14:20.290 "sha384", 00:14:20.290 "sha512" 00:14:20.290 ], 00:14:20.290 
"dhchap_dhgroups": [ 00:14:20.290 "null", 00:14:20.290 "ffdhe2048", 00:14:20.290 "ffdhe3072", 00:14:20.290 "ffdhe4096", 00:14:20.290 "ffdhe6144", 00:14:20.290 "ffdhe8192" 00:14:20.290 ] 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "bdev_nvme_set_hotplug", 00:14:20.290 "params": { 00:14:20.290 "period_us": 100000, 00:14:20.290 "enable": false 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "bdev_malloc_create", 00:14:20.290 "params": { 00:14:20.290 "name": "malloc0", 00:14:20.290 "num_blocks": 8192, 00:14:20.290 "block_size": 4096, 00:14:20.290 "physical_block_size": 4096, 00:14:20.290 "uuid": "1357c82a-b8fe-410e-b91d-3627fb15031e", 00:14:20.290 "optimal_io_boundary": 0, 00:14:20.290 "md_size": 0, 00:14:20.290 "dif_type": 0, 00:14:20.290 "dif_is_head_of_md": false, 00:14:20.290 "dif_pi_format": 0 00:14:20.290 } 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "method": "bdev_wait_for_examine" 00:14:20.290 } 00:14:20.290 ] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "nbd", 00:14:20.290 "config": [] 00:14:20.290 }, 00:14:20.290 { 00:14:20.290 "subsystem": "scheduler", 00:14:20.290 "config": [ 00:14:20.290 { 00:14:20.291 "method": "framework_set_scheduler", 00:14:20.291 "params": { 00:14:20.291 "name": "static" 00:14:20.291 } 00:14:20.291 } 00:14:20.291 ] 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "subsystem": "nvmf", 00:14:20.291 "config": [ 00:14:20.291 { 00:14:20.291 "method": "nvmf_set_config", 00:14:20.291 "params": { 00:14:20.291 "discovery_filter": "match_any", 00:14:20.291 "admin_cmd_passthru": { 00:14:20.291 "identify_ctrlr": false 00:14:20.291 }, 00:14:20.291 "dhchap_digests": [ 00:14:20.291 "sha256", 00:14:20.291 "sha384", 00:14:20.291 "sha512" 00:14:20.291 ], 00:14:20.291 "dhchap_dhgroups": [ 00:14:20.291 "null", 00:14:20.291 "ffdhe2048", 00:14:20.291 "ffdhe3072", 00:14:20.291 "ffdhe4096", 00:14:20.291 "ffdhe6144", 00:14:20.291 "ffdhe8192" 00:14:20.291 ] 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_set_max_subsystems", 00:14:20.291 "params": { 00:14:20.291 "max_subsystems": 1024 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_set_crdt", 00:14:20.291 "params": { 00:14:20.291 "crdt1": 0, 00:14:20.291 "crdt2": 0, 00:14:20.291 "crdt3": 0 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_create_transport", 00:14:20.291 "params": { 00:14:20.291 "trtype": "TCP", 00:14:20.291 "max_queue_depth": 128, 00:14:20.291 "max_io_qpairs_per_ctrlr": 127, 00:14:20.291 "in_capsule_data_size": 4096, 00:14:20.291 "max_io_size": 131072, 00:14:20.291 "io_unit_size": 131072, 00:14:20.291 "max_aq_depth": 128, 00:14:20.291 "num_shared_buffers": 511, 00:14:20.291 "buf_cache_size": 4294967295, 00:14:20.291 "dif_insert_or_strip": false, 00:14:20.291 "zcopy": false, 00:14:20.291 "c2h_success": false, 00:14:20.291 "sock_priority": 0, 00:14:20.291 "abort_timeout_sec": 1, 00:14:20.291 "ack_timeout": 0, 00:14:20.291 "data_wr_pool_size": 0 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_create_subsystem", 00:14:20.291 "params": { 00:14:20.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.291 "allow_any_host": false, 00:14:20.291 "serial_number": "00000000000000000000", 00:14:20.291 "model_number": "SPDK bdev Controller", 00:14:20.291 "max_namespaces": 32, 00:14:20.291 "min_cntlid": 1, 00:14:20.291 "max_cntlid": 65519, 00:14:20.291 "ana_reporting": false 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_subsystem_add_host", 
00:14:20.291 "params": { 00:14:20.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.291 "host": "nqn.2016-06.io.spdk:host1", 00:14:20.291 "psk": "key0" 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_subsystem_add_ns", 00:14:20.291 "params": { 00:14:20.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.291 "namespace": { 00:14:20.291 "nsid": 1, 00:14:20.291 "bdev_name": "malloc0", 00:14:20.291 "nguid": "1357C82AB8FE410EB91D3627FB15031E", 00:14:20.291 "uuid": "1357c82a-b8fe-410e-b91d-3627fb15031e", 00:14:20.291 "no_auto_visible": false 00:14:20.291 } 00:14:20.291 } 00:14:20.291 }, 00:14:20.291 { 00:14:20.291 "method": "nvmf_subsystem_add_listener", 00:14:20.291 "params": { 00:14:20.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.291 "listen_address": { 00:14:20.291 "trtype": "TCP", 00:14:20.291 "adrfam": "IPv4", 00:14:20.291 "traddr": "10.0.0.3", 00:14:20.291 "trsvcid": "4420" 00:14:20.291 }, 00:14:20.291 "secure_channel": false, 00:14:20.291 "sock_impl": "ssl" 00:14:20.291 } 00:14:20.291 } 00:14:20.291 ] 00:14:20.291 } 00:14:20.291 ] 00:14:20.291 }' 00:14:20.291 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:20.550 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:20.550 "subsystems": [ 00:14:20.550 { 00:14:20.550 "subsystem": "keyring", 00:14:20.550 "config": [ 00:14:20.550 { 00:14:20.550 "method": "keyring_file_add_key", 00:14:20.550 "params": { 00:14:20.550 "name": "key0", 00:14:20.550 "path": "/tmp/tmp.fVGwUt2wMd" 00:14:20.550 } 00:14:20.550 } 00:14:20.550 ] 00:14:20.550 }, 00:14:20.550 { 00:14:20.550 "subsystem": "iobuf", 00:14:20.550 "config": [ 00:14:20.550 { 00:14:20.550 "method": "iobuf_set_options", 00:14:20.550 "params": { 00:14:20.550 "small_pool_count": 8192, 00:14:20.550 "large_pool_count": 1024, 00:14:20.550 "small_bufsize": 8192, 00:14:20.550 "large_bufsize": 135168, 00:14:20.550 "enable_numa": false 00:14:20.550 } 00:14:20.550 } 00:14:20.550 ] 00:14:20.550 }, 00:14:20.550 { 00:14:20.550 "subsystem": "sock", 00:14:20.550 "config": [ 00:14:20.550 { 00:14:20.550 "method": "sock_set_default_impl", 00:14:20.550 "params": { 00:14:20.550 "impl_name": "uring" 00:14:20.550 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "sock_impl_set_options", 00:14:20.551 "params": { 00:14:20.551 "impl_name": "ssl", 00:14:20.551 "recv_buf_size": 4096, 00:14:20.551 "send_buf_size": 4096, 00:14:20.551 "enable_recv_pipe": true, 00:14:20.551 "enable_quickack": false, 00:14:20.551 "enable_placement_id": 0, 00:14:20.551 "enable_zerocopy_send_server": true, 00:14:20.551 "enable_zerocopy_send_client": false, 00:14:20.551 "zerocopy_threshold": 0, 00:14:20.551 "tls_version": 0, 00:14:20.551 "enable_ktls": false 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "sock_impl_set_options", 00:14:20.551 "params": { 00:14:20.551 "impl_name": "posix", 00:14:20.551 "recv_buf_size": 2097152, 00:14:20.551 "send_buf_size": 2097152, 00:14:20.551 "enable_recv_pipe": true, 00:14:20.551 "enable_quickack": false, 00:14:20.551 "enable_placement_id": 0, 00:14:20.551 "enable_zerocopy_send_server": true, 00:14:20.551 "enable_zerocopy_send_client": false, 00:14:20.551 "zerocopy_threshold": 0, 00:14:20.551 "tls_version": 0, 00:14:20.551 "enable_ktls": false 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "sock_impl_set_options", 00:14:20.551 "params": { 00:14:20.551 "impl_name": "uring", 00:14:20.551 
"recv_buf_size": 2097152, 00:14:20.551 "send_buf_size": 2097152, 00:14:20.551 "enable_recv_pipe": true, 00:14:20.551 "enable_quickack": false, 00:14:20.551 "enable_placement_id": 0, 00:14:20.551 "enable_zerocopy_send_server": false, 00:14:20.551 "enable_zerocopy_send_client": false, 00:14:20.551 "zerocopy_threshold": 0, 00:14:20.551 "tls_version": 0, 00:14:20.551 "enable_ktls": false 00:14:20.551 } 00:14:20.551 } 00:14:20.551 ] 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "subsystem": "vmd", 00:14:20.551 "config": [] 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "subsystem": "accel", 00:14:20.551 "config": [ 00:14:20.551 { 00:14:20.551 "method": "accel_set_options", 00:14:20.551 "params": { 00:14:20.551 "small_cache_size": 128, 00:14:20.551 "large_cache_size": 16, 00:14:20.551 "task_count": 2048, 00:14:20.551 "sequence_count": 2048, 00:14:20.551 "buf_count": 2048 00:14:20.551 } 00:14:20.551 } 00:14:20.551 ] 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "subsystem": "bdev", 00:14:20.551 "config": [ 00:14:20.551 { 00:14:20.551 "method": "bdev_set_options", 00:14:20.551 "params": { 00:14:20.551 "bdev_io_pool_size": 65535, 00:14:20.551 "bdev_io_cache_size": 256, 00:14:20.551 "bdev_auto_examine": true, 00:14:20.551 "iobuf_small_cache_size": 128, 00:14:20.551 "iobuf_large_cache_size": 16 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_raid_set_options", 00:14:20.551 "params": { 00:14:20.551 "process_window_size_kb": 1024, 00:14:20.551 "process_max_bandwidth_mb_sec": 0 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_iscsi_set_options", 00:14:20.551 "params": { 00:14:20.551 "timeout_sec": 30 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_nvme_set_options", 00:14:20.551 "params": { 00:14:20.551 "action_on_timeout": "none", 00:14:20.551 "timeout_us": 0, 00:14:20.551 "timeout_admin_us": 0, 00:14:20.551 "keep_alive_timeout_ms": 10000, 00:14:20.551 "arbitration_burst": 0, 00:14:20.551 "low_priority_weight": 0, 00:14:20.551 "medium_priority_weight": 0, 00:14:20.551 "high_priority_weight": 0, 00:14:20.551 "nvme_adminq_poll_period_us": 10000, 00:14:20.551 "nvme_ioq_poll_period_us": 0, 00:14:20.551 "io_queue_requests": 512, 00:14:20.551 "delay_cmd_submit": true, 00:14:20.551 "transport_retry_count": 4, 00:14:20.551 "bdev_retry_count": 3, 00:14:20.551 "transport_ack_timeout": 0, 00:14:20.551 "ctrlr_loss_timeout_sec": 0, 00:14:20.551 "reconnect_delay_sec": 0, 00:14:20.551 "fast_io_fail_timeout_sec": 0, 00:14:20.551 "disable_auto_failback": false, 00:14:20.551 "generate_uuids": false, 00:14:20.551 "transport_tos": 0, 00:14:20.551 "nvme_error_stat": false, 00:14:20.551 "rdma_srq_size": 0, 00:14:20.551 "io_path_stat": false, 00:14:20.551 "allow_accel_sequence": false, 00:14:20.551 "rdma_max_cq_size": 0, 00:14:20.551 "rdma_cm_event_timeout_ms": 0, 00:14:20.551 "dhchap_digests": [ 00:14:20.551 "sha256", 00:14:20.551 "sha384", 00:14:20.551 "sha512" 00:14:20.551 ], 00:14:20.551 "dhchap_dhgroups": [ 00:14:20.551 "null", 00:14:20.551 "ffdhe2048", 00:14:20.551 "ffdhe3072", 00:14:20.551 "ffdhe4096", 00:14:20.551 "ffdhe6144", 00:14:20.551 "ffdhe8192" 00:14:20.551 ] 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_nvme_attach_controller", 00:14:20.551 "params": { 00:14:20.551 "name": "nvme0", 00:14:20.551 "trtype": "TCP", 00:14:20.551 "adrfam": "IPv4", 00:14:20.551 "traddr": "10.0.0.3", 00:14:20.551 "trsvcid": "4420", 00:14:20.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.551 "prchk_reftag": false, 00:14:20.551 
"prchk_guard": false, 00:14:20.551 "ctrlr_loss_timeout_sec": 0, 00:14:20.551 "reconnect_delay_sec": 0, 00:14:20.551 "fast_io_fail_timeout_sec": 0, 00:14:20.551 "psk": "key0", 00:14:20.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.551 "hdgst": false, 00:14:20.551 "ddgst": false, 00:14:20.551 "multipath": "multipath" 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_nvme_set_hotplug", 00:14:20.551 "params": { 00:14:20.551 "period_us": 100000, 00:14:20.551 "enable": false 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_enable_histogram", 00:14:20.551 "params": { 00:14:20.551 "name": "nvme0n1", 00:14:20.551 "enable": true 00:14:20.551 } 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "method": "bdev_wait_for_examine" 00:14:20.551 } 00:14:20.551 ] 00:14:20.551 }, 00:14:20.551 { 00:14:20.551 "subsystem": "nbd", 00:14:20.551 "config": [] 00:14:20.551 } 00:14:20.551 ] 00:14:20.551 }' 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72546 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72546 ']' 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72546 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72546 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:20.551 killing process with pid 72546 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72546' 00:14:20.551 Received shutdown signal, test time was about 1.000000 seconds 00:14:20.551 00:14:20.551 Latency(us) 00:14:20.551 [2024-11-26T20:35:20.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.551 [2024-11-26T20:35:20.906Z] =================================================================================================================== 00:14:20.551 [2024-11-26T20:35:20.906Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72546 00:14:20.551 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72546 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72510 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72510 ']' 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72510 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72510 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.811 20:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.811 killing process with pid 72510 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72510' 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72510 00:14:20.811 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72510 00:14:21.071 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:21.071 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:21.071 "subsystems": [ 00:14:21.071 { 00:14:21.071 "subsystem": "keyring", 00:14:21.071 "config": [ 00:14:21.071 { 00:14:21.071 "method": "keyring_file_add_key", 00:14:21.071 "params": { 00:14:21.071 "name": "key0", 00:14:21.071 "path": "/tmp/tmp.fVGwUt2wMd" 00:14:21.071 } 00:14:21.071 } 00:14:21.071 ] 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "subsystem": "iobuf", 00:14:21.071 "config": [ 00:14:21.071 { 00:14:21.071 "method": "iobuf_set_options", 00:14:21.071 "params": { 00:14:21.071 "small_pool_count": 8192, 00:14:21.071 "large_pool_count": 1024, 00:14:21.071 "small_bufsize": 8192, 00:14:21.071 "large_bufsize": 135168, 00:14:21.071 "enable_numa": false 00:14:21.071 } 00:14:21.071 } 00:14:21.071 ] 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "subsystem": "sock", 00:14:21.071 "config": [ 00:14:21.071 { 00:14:21.071 "method": "sock_set_default_impl", 00:14:21.071 "params": { 00:14:21.071 "impl_name": "uring" 00:14:21.071 } 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "method": "sock_impl_set_options", 00:14:21.071 "params": { 00:14:21.071 "impl_name": "ssl", 00:14:21.071 "recv_buf_size": 4096, 00:14:21.071 "send_buf_size": 4096, 00:14:21.071 "enable_recv_pipe": true, 00:14:21.071 "enable_quickack": false, 00:14:21.071 "enable_placement_id": 0, 00:14:21.071 "enable_zerocopy_send_server": true, 00:14:21.071 "enable_zerocopy_send_client": false, 00:14:21.071 "zerocopy_threshold": 0, 00:14:21.071 "tls_version": 0, 00:14:21.071 "enable_ktls": false 00:14:21.071 } 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "method": "sock_impl_set_options", 00:14:21.071 "params": { 00:14:21.071 "impl_name": "posix", 00:14:21.071 "recv_buf_size": 2097152, 00:14:21.071 "send_buf_size": 2097152, 00:14:21.071 "enable_recv_pipe": true, 00:14:21.071 "enable_quickack": false, 00:14:21.071 "enable_placement_id": 0, 00:14:21.071 "enable_zerocopy_send_server": true, 00:14:21.071 "enable_zerocopy_send_client": false, 00:14:21.071 "zerocopy_threshold": 0, 00:14:21.071 "tls_version": 0, 00:14:21.071 "enable_ktls": false 00:14:21.071 } 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "method": "sock_impl_set_options", 00:14:21.071 "params": { 00:14:21.071 "impl_name": "uring", 00:14:21.071 "recv_buf_size": 2097152, 00:14:21.071 "send_buf_size": 2097152, 00:14:21.071 "enable_recv_pipe": true, 00:14:21.071 "enable_quickack": false, 00:14:21.071 "enable_placement_id": 0, 00:14:21.071 "enable_zerocopy_send_server": false, 00:14:21.071 "enable_zerocopy_send_client": false, 00:14:21.071 "zerocopy_threshold": 0, 00:14:21.071 "tls_version": 0, 00:14:21.071 "enable_ktls": false 00:14:21.071 } 00:14:21.071 } 00:14:21.071 ] 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "subsystem": "vmd", 00:14:21.071 "config": [] 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "subsystem": "accel", 00:14:21.071 "config": [ 00:14:21.071 { 00:14:21.071 "method": 
"accel_set_options", 00:14:21.071 "params": { 00:14:21.071 "small_cache_size": 128, 00:14:21.071 "large_cache_size": 16, 00:14:21.071 "task_count": 2048, 00:14:21.071 "sequence_count": 2048, 00:14:21.071 "buf_count": 2048 00:14:21.071 } 00:14:21.071 } 00:14:21.071 ] 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "subsystem": "bdev", 00:14:21.071 "config": [ 00:14:21.071 { 00:14:21.071 "method": "bdev_set_options", 00:14:21.071 "params": { 00:14:21.071 "bdev_io_pool_size": 65535, 00:14:21.071 "bdev_io_cache_size": 256, 00:14:21.071 "bdev_auto_examine": true, 00:14:21.071 "iobuf_small_cache_size": 128, 00:14:21.071 "iobuf_large_cache_size": 16 00:14:21.071 } 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "method": "bdev_raid_set_options", 00:14:21.071 "params": { 00:14:21.071 "process_window_size_kb": 1024, 00:14:21.071 "process_max_bandwidth_mb_sec": 0 00:14:21.071 } 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "method": "bdev_iscsi_set_options", 00:14:21.071 "params": { 00:14:21.071 "timeout_sec": 30 00:14:21.071 } 00:14:21.071 }, 00:14:21.071 { 00:14:21.071 "method": "bdev_nvme_set_options", 00:14:21.071 "params": { 00:14:21.071 "action_on_timeout": "none", 00:14:21.071 "timeout_us": 0, 00:14:21.071 "timeout_admin_us": 0, 00:14:21.071 "keep_alive_timeout_ms": 10000, 00:14:21.071 "arbitration_burst": 0, 00:14:21.071 "low_priority_weight": 0, 00:14:21.071 "medium_priority_weight": 0, 00:14:21.071 "high_priority_weight": 0, 00:14:21.071 "nvme_adminq_poll_period_us": 10000, 00:14:21.071 "nvme_ioq_poll_period_us": 0, 00:14:21.071 "io_queue_requests": 0, 00:14:21.071 "delay_cmd_submit": true, 00:14:21.071 "transport_retry_count": 4, 00:14:21.071 "bdev_retry_count": 3, 00:14:21.071 "transport_ack_timeout": 0, 00:14:21.071 "ctrlr_loss_timeout_sec": 0, 00:14:21.072 "reconnect_delay_sec": 0, 00:14:21.072 "fast_io_fail_timeout_sec": 0, 00:14:21.072 "disable_auto_failback": false, 00:14:21.072 "generate_uuids": false, 00:14:21.072 "transport_tos": 0, 00:14:21.072 "nvme_error_stat": false, 00:14:21.072 "rdma_srq_size": 0, 00:14:21.072 "io_path_stat": false, 00:14:21.072 "allow_accel_sequence": false, 00:14:21.072 "rdma_max_cq_size": 0, 00:14:21.072 "rdma_cm_event_timeout_ms": 0, 00:14:21.072 "dhchap_digests": [ 00:14:21.072 "sha256", 00:14:21.072 "sha384", 00:14:21.072 "sha512" 00:14:21.072 ], 00:14:21.072 "dhchap_dhgroups": [ 00:14:21.072 "null", 00:14:21.072 "ffdhe2048", 00:14:21.072 "ffdhe3072", 00:14:21.072 "ffdhe4096", 00:14:21.072 "ffdhe6144", 00:14:21.072 "ffdhe8192" 00:14:21.072 ] 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "bdev_nvme_set_hotplug", 00:14:21.072 "params": { 00:14:21.072 "period_us": 100000, 00:14:21.072 "enable": false 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "bdev_malloc_create", 00:14:21.072 "params": { 00:14:21.072 "name": "malloc0", 00:14:21.072 "num_blocks": 8192, 00:14:21.072 "block_size": 4096, 00:14:21.072 "physical_block_size": 4096, 00:14:21.072 "uuid": "1357c82a-b8fe-410e-b91d-3627fb15031e", 00:14:21.072 "optimal_io_boundary": 0, 00:14:21.072 "md_size": 0, 00:14:21.072 "dif_type": 0, 00:14:21.072 "dif_is_head_of_md": false, 00:14:21.072 "dif_pi_format": 0 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "bdev_wait_for_examine" 00:14:21.072 } 00:14:21.072 ] 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "subsystem": "nbd", 00:14:21.072 "config": [] 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "subsystem": "scheduler", 00:14:21.072 "config": [ 00:14:21.072 { 00:14:21.072 "method": "framework_set_scheduler", 
00:14:21.072 "params": { 00:14:21.072 "name": "static" 00:14:21.072 } 00:14:21.072 } 00:14:21.072 ] 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "subsystem": "nvmf", 00:14:21.072 "config": [ 00:14:21.072 { 00:14:21.072 "method": "nvmf_set_config", 00:14:21.072 "params": { 00:14:21.072 "discovery_filter": "match_any", 00:14:21.072 "admin_cmd_passthru": { 00:14:21.072 "identify_ctrlr": false 00:14:21.072 }, 00:14:21.072 "dhchap_digests": [ 00:14:21.072 "sha256", 00:14:21.072 "sha384", 00:14:21.072 "sha512" 00:14:21.072 ], 00:14:21.072 "dhchap_dhgroups": [ 00:14:21.072 "null", 00:14:21.072 "ffdhe2048", 00:14:21.072 "ffdhe3072", 00:14:21.072 "ffdhe4096", 00:14:21.072 "ffdhe6144", 00:14:21.072 "ffdhe8192" 00:14:21.072 ] 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_set_max_subsystems", 00:14:21.072 "params": { 00:14:21.072 "max_subsystems": 1024 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_set_crdt", 00:14:21.072 "params": { 00:14:21.072 "crdt1": 0, 00:14:21.072 "crdt2": 0, 00:14:21.072 "crdt3": 0 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_create_transport", 00:14:21.072 "params": { 00:14:21.072 "trtype": "TCP", 00:14:21.072 "max_queue_depth": 128, 00:14:21.072 "max_io_qpairs_per_ctrlr": 127, 00:14:21.072 "in_capsule_data_size": 4096, 00:14:21.072 "max_io_size": 131072, 00:14:21.072 "io_unit_size": 131072, 00:14:21.072 "max_aq_depth": 128, 00:14:21.072 "num_shared_buffers": 511, 00:14:21.072 "buf_cache_size": 4294967295, 00:14:21.072 "dif_insert_or_strip": false, 00:14:21.072 "zcopy": false, 00:14:21.072 "c2h_success": false, 00:14:21.072 "sock_priority": 0, 00:14:21.072 "abort_timeout_sec": 1, 00:14:21.072 "ack_timeout": 0, 00:14:21.072 "data_wr_pool_size": 0 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_create_subsystem", 00:14:21.072 "params": { 00:14:21.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.072 "allow_any_host": false, 00:14:21.072 "serial_number": "00000000000000000000", 00:14:21.072 "model_number": "SPDK bdev Controller", 00:14:21.072 "max_namespaces": 32, 00:14:21.072 "min_cntlid": 1, 00:14:21.072 "max_cntlid": 65519, 00:14:21.072 "ana_reporting": false 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_subsystem_add_host", 00:14:21.072 "params": { 00:14:21.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.072 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.072 "psk": "key0" 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_subsystem_add_ns", 00:14:21.072 "params": { 00:14:21.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.072 "namespace": { 00:14:21.072 "nsid": 1, 00:14:21.072 "bdev_name": "malloc0", 00:14:21.072 "nguid": "1357C82AB8FE410EB91D3627FB15031E", 00:14:21.072 "uuid": "1357c82a-b8fe-410e-b91d-3627fb15031e", 00:14:21.072 "no_auto_visible": false 00:14:21.072 } 00:14:21.072 } 00:14:21.072 }, 00:14:21.072 { 00:14:21.072 "method": "nvmf_subsystem_add_listener", 00:14:21.072 "params": { 00:14:21.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.072 "listen_address": { 00:14:21.072 "trtype": "TCP", 00:14:21.072 "adrfam": "IPv4", 00:14:21.072 "traddr": "10.0.0.3", 00:14:21.072 "trsvcid": "4420" 00:14:21.072 }, 00:14:21.072 "secure_channel": false, 00:14:21.072 "sock_impl": "ssl" 00:14:21.072 } 00:14:21.072 } 00:14:21.072 ] 00:14:21.072 } 00:14:21.072 ] 00:14:21.072 }' 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.072 20:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72609 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72609 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72609 ']' 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.072 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.072 [2024-11-26 20:35:21.305753] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:21.072 [2024-11-26 20:35:21.305842] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.332 [2024-11-26 20:35:21.456161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.332 [2024-11-26 20:35:21.525011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.332 [2024-11-26 20:35:21.525075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.332 [2024-11-26 20:35:21.525089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.332 [2024-11-26 20:35:21.525100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.332 [2024-11-26 20:35:21.525110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
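Editor's note: the JSON blob echoed above is handed to nvmf_tgt on /dev/fd/62 at startup, but the same TLS pieces can also be applied to an already running target through its JSON-RPC socket (the test suite does this kind of thing via scripts/rpc.py). The sketch below is illustrative only: the method names and parameters are copied from the config above, while the bare-bones rpc_call helper and the use of the default /var/tmp/spdk.sock socket are assumptions, not code from the test.

```python
#!/usr/bin/env python3
"""Hedged sketch: issue the TLS-related RPCs from the echoed config at runtime.
Not part of the test suite; shown only to make the config block concrete."""
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # default RPC socket mentioned in the log above


def rpc_call(method, params=None, req_id=1):
    # Minimal JSON-RPC 2.0 client: send one request, read until it parses.
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            buf += s.recv(4096)
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # partial response, keep reading


# Register the PSK file in the keyring (name and path taken from the config).
rpc_call("keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.fVGwUt2wMd"})

# Allow host1 to connect to cnode1 using that PSK.
rpc_call("nvmf_subsystem_add_host", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "host": "nqn.2016-06.io.spdk:host1",
    "psk": "key0",
}, req_id=2)

# TLS listener: a plain TCP portal forced onto the "ssl" socket implementation.
rpc_call("nvmf_subsystem_add_listener", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.3", "trsvcid": "4420"},
    "secure_channel": False,
    "sock_impl": "ssl",
}, req_id=3)
```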
00:14:21.332 [2024-11-26 20:35:21.525659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.591 [2024-11-26 20:35:21.697717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.591 [2024-11-26 20:35:21.785879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.591 [2024-11-26 20:35:21.817821] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.591 [2024-11-26 20:35:21.818047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72641 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72641 /var/tmp/bdevperf.sock 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72641 ']' 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.160 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:22.160 "subsystems": [ 00:14:22.160 { 00:14:22.160 "subsystem": "keyring", 00:14:22.160 "config": [ 00:14:22.160 { 00:14:22.160 "method": "keyring_file_add_key", 00:14:22.160 "params": { 00:14:22.160 "name": "key0", 00:14:22.160 "path": "/tmp/tmp.fVGwUt2wMd" 00:14:22.160 } 00:14:22.160 } 00:14:22.160 ] 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "subsystem": "iobuf", 00:14:22.160 "config": [ 00:14:22.160 { 00:14:22.160 "method": "iobuf_set_options", 00:14:22.160 "params": { 00:14:22.160 "small_pool_count": 8192, 00:14:22.160 "large_pool_count": 1024, 00:14:22.160 "small_bufsize": 8192, 00:14:22.160 "large_bufsize": 135168, 00:14:22.160 "enable_numa": false 00:14:22.160 } 00:14:22.160 } 00:14:22.160 ] 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "subsystem": "sock", 00:14:22.160 "config": [ 00:14:22.160 { 00:14:22.160 "method": "sock_set_default_impl", 00:14:22.160 "params": { 00:14:22.160 "impl_name": "uring" 00:14:22.160 } 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "method": "sock_impl_set_options", 00:14:22.160 "params": { 00:14:22.160 "impl_name": "ssl", 00:14:22.160 "recv_buf_size": 4096, 00:14:22.160 "send_buf_size": 4096, 00:14:22.160 "enable_recv_pipe": true, 00:14:22.160 "enable_quickack": false, 00:14:22.160 "enable_placement_id": 0, 00:14:22.160 "enable_zerocopy_send_server": true, 00:14:22.160 "enable_zerocopy_send_client": false, 00:14:22.160 "zerocopy_threshold": 0, 00:14:22.160 "tls_version": 0, 00:14:22.160 "enable_ktls": false 00:14:22.160 } 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "method": "sock_impl_set_options", 00:14:22.160 "params": { 00:14:22.160 "impl_name": "posix", 00:14:22.160 "recv_buf_size": 2097152, 00:14:22.160 "send_buf_size": 2097152, 00:14:22.160 "enable_recv_pipe": true, 00:14:22.160 "enable_quickack": false, 00:14:22.160 "enable_placement_id": 0, 00:14:22.160 "enable_zerocopy_send_server": true, 00:14:22.160 "enable_zerocopy_send_client": false, 00:14:22.160 "zerocopy_threshold": 0, 00:14:22.160 "tls_version": 0, 00:14:22.160 "enable_ktls": false 00:14:22.160 } 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "method": "sock_impl_set_options", 00:14:22.160 "params": { 00:14:22.160 "impl_name": "uring", 00:14:22.160 "recv_buf_size": 2097152, 00:14:22.160 "send_buf_size": 2097152, 00:14:22.160 "enable_recv_pipe": true, 00:14:22.160 "enable_quickack": false, 00:14:22.160 "enable_placement_id": 0, 00:14:22.160 "enable_zerocopy_send_server": false, 00:14:22.160 "enable_zerocopy_send_client": false, 00:14:22.160 "zerocopy_threshold": 0, 00:14:22.160 "tls_version": 0, 00:14:22.160 "enable_ktls": false 00:14:22.160 } 00:14:22.160 } 00:14:22.160 ] 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "subsystem": "vmd", 00:14:22.160 "config": [] 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "subsystem": "accel", 00:14:22.160 "config": [ 00:14:22.160 { 00:14:22.160 "method": "accel_set_options", 00:14:22.160 "params": { 00:14:22.160 "small_cache_size": 128, 00:14:22.160 "large_cache_size": 16, 00:14:22.160 "task_count": 2048, 00:14:22.160 "sequence_count": 2048, 
00:14:22.160 "buf_count": 2048 00:14:22.160 } 00:14:22.160 } 00:14:22.160 ] 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "subsystem": "bdev", 00:14:22.160 "config": [ 00:14:22.160 { 00:14:22.160 "method": "bdev_set_options", 00:14:22.160 "params": { 00:14:22.160 "bdev_io_pool_size": 65535, 00:14:22.160 "bdev_io_cache_size": 256, 00:14:22.160 "bdev_auto_examine": true, 00:14:22.160 "iobuf_small_cache_size": 128, 00:14:22.160 "iobuf_large_cache_size": 16 00:14:22.160 } 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "method": "bdev_raid_set_options", 00:14:22.160 "params": { 00:14:22.160 "process_window_size_kb": 1024, 00:14:22.160 "process_max_bandwidth_mb_sec": 0 00:14:22.160 } 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "method": "bdev_iscsi_set_options", 00:14:22.160 "params": { 00:14:22.160 "timeout_sec": 30 00:14:22.160 } 00:14:22.160 }, 00:14:22.160 { 00:14:22.160 "method": "bdev_nvme_set_options", 00:14:22.160 "params": { 00:14:22.160 "action_on_timeout": "none", 00:14:22.160 "timeout_us": 0, 00:14:22.160 "timeout_admin_us": 0, 00:14:22.160 "keep_alive_timeout_ms": 10000, 00:14:22.160 "arbitration_burst": 0, 00:14:22.160 "low_priority_weight": 0, 00:14:22.160 "medium_priority_weight": 0, 00:14:22.160 "high_priority_weight": 0, 00:14:22.160 "nvme_adminq_poll_period_us": 10000, 00:14:22.160 "nvme_ioq_poll_period_us": 0, 00:14:22.160 "io_queue_requests": 512, 00:14:22.160 "delay_cmd_submit": true, 00:14:22.160 "transport_retry_count": 4, 00:14:22.160 "bdev_retry_count": 3, 00:14:22.160 "transport_ack_timeout": 0, 00:14:22.161 "ctrlr_loss_timeout_sec": 0, 00:14:22.161 "reconnect_delay_sec": 0, 00:14:22.161 "fast_io_fail_timeout_sec": 0, 00:14:22.161 "disable_auto_failback": false, 00:14:22.161 "generate_uuids": false, 00:14:22.161 "transport_tos": 0, 00:14:22.161 "nvme_error_stat": false, 00:14:22.161 "rdma_srq_size": 0, 00:14:22.161 "io_path_stat": false, 00:14:22.161 "allow_accel_sequence": false, 00:14:22.161 "rdma_max_cq_size": 0, 00:14:22.161 "rdma_cm_event_timeout_ms": 0, 00:14:22.161 "dhchap_digests": [ 00:14:22.161 "sha256", 00:14:22.161 "sha384", 00:14:22.161 "sha512" 00:14:22.161 ], 00:14:22.161 "dhchap_dhgroups": [ 00:14:22.161 "null", 00:14:22.161 "ffdhe2048", 00:14:22.161 "ffdhe3072", 00:14:22.161 "ffdhe4096", 00:14:22.161 "ffdhe6144", 00:14:22.161 "ffdhe8192" 00:14:22.161 ] 00:14:22.161 } 00:14:22.161 }, 00:14:22.161 { 00:14:22.161 "method": "bdev_nvme_attach_controller", 00:14:22.161 "params": { 00:14:22.161 "name": "nvme0", 00:14:22.161 "trtype": "TCP", 00:14:22.161 "adrfam": "IPv4", 00:14:22.161 "traddr": "10.0.0.3", 00:14:22.161 "trsvcid": "4420", 00:14:22.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.161 "prchk_reftag": false, 00:14:22.161 "prchk_guard": false, 00:14:22.161 "ctrlr_loss_timeout_sec": 0, 00:14:22.161 "reconnect_delay_sec": 0, 00:14:22.161 "fast_io_fail_timeout_sec": 0, 00:14:22.161 "psk": "key0", 00:14:22.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.161 "hdgst": false, 00:14:22.161 "ddgst": false, 00:14:22.161 "multipath": "multipath" 00:14:22.161 } 00:14:22.161 }, 00:14:22.161 { 00:14:22.161 "method": "bdev_nvme_set_hotplug", 00:14:22.161 "params": { 00:14:22.161 "period_us": 100000, 00:14:22.161 "enable": false 00:14:22.161 } 00:14:22.161 }, 00:14:22.161 { 00:14:22.161 "method": "bdev_enable_histogram", 00:14:22.161 "params": { 00:14:22.161 "name": "nvme0n1", 00:14:22.161 "enable": true 00:14:22.161 } 00:14:22.161 }, 00:14:22.161 { 00:14:22.161 "method": "bdev_wait_for_examine" 00:14:22.161 } 00:14:22.161 ] 00:14:22.161 }, 00:14:22.161 { 
00:14:22.161 "subsystem": "nbd", 00:14:22.161 "config": [] 00:14:22.161 } 00:14:22.161 ] 00:14:22.161 }' 00:14:22.161 [2024-11-26 20:35:22.488854] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:22.161 [2024-11-26 20:35:22.489112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72641 ] 00:14:22.419 [2024-11-26 20:35:22.628831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.419 [2024-11-26 20:35:22.691649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.678 [2024-11-26 20:35:22.827366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.678 [2024-11-26 20:35:22.878977] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.245 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.245 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:23.245 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:23.245 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:23.503 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.503 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:23.761 Running I/O for 1 seconds... 
00:14:24.695 3891.00 IOPS, 15.20 MiB/s 00:14:24.695 Latency(us) 00:14:24.695 [2024-11-26T20:35:25.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.695 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.695 Verification LBA range: start 0x0 length 0x2000 00:14:24.695 nvme0n1 : 1.02 3928.03 15.34 0.00 0.00 32128.78 5689.72 20137.43 00:14:24.695 [2024-11-26T20:35:25.050Z] =================================================================================================================== 00:14:24.695 [2024-11-26T20:35:25.050Z] Total : 3928.03 15.34 0.00 0.00 32128.78 5689.72 20137.43 00:14:24.695 { 00:14:24.695 "results": [ 00:14:24.695 { 00:14:24.695 "job": "nvme0n1", 00:14:24.695 "core_mask": "0x2", 00:14:24.695 "workload": "verify", 00:14:24.695 "status": "finished", 00:14:24.695 "verify_range": { 00:14:24.695 "start": 0, 00:14:24.695 "length": 8192 00:14:24.695 }, 00:14:24.695 "queue_depth": 128, 00:14:24.695 "io_size": 4096, 00:14:24.695 "runtime": 1.023159, 00:14:24.695 "iops": 3928.030736180789, 00:14:24.695 "mibps": 15.343870063206207, 00:14:24.695 "io_failed": 0, 00:14:24.695 "io_timeout": 0, 00:14:24.695 "avg_latency_us": 32128.779772444523, 00:14:24.695 "min_latency_us": 5689.716363636364, 00:14:24.695 "max_latency_us": 20137.425454545453 00:14:24.695 } 00:14:24.695 ], 00:14:24.695 "core_count": 1 00:14:24.695 } 00:14:24.695 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:24.695 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:24.696 nvmf_trace.0 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72641 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72641 ']' 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72641 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.696 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72641 00:14:24.696 20:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:24.696 killing process with pid 72641 00:14:24.696 Received shutdown signal, test time was about 1.000000 seconds 00:14:24.696 00:14:24.696 Latency(us) 00:14:24.696 [2024-11-26T20:35:25.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.696 [2024-11-26T20:35:25.051Z] =================================================================================================================== 00:14:24.696 [2024-11-26T20:35:25.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.696 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:24.696 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72641' 00:14:24.696 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72641 00:14:24.696 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72641 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.954 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.954 rmmod nvme_tcp 00:14:24.954 rmmod nvme_fabrics 00:14:24.954 rmmod nvme_keyring 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72609 ']' 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72609 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72609 ']' 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72609 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72609 00:14:25.212 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.212 killing process with pid 72609 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72609' 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72609 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72609 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.213 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Yccnk6Y0pq /tmp/tmp.I5avx2drxz /tmp/tmp.fVGwUt2wMd 00:14:25.471 00:14:25.471 real 1m29.199s 00:14:25.471 user 2m26.277s 00:14:25.471 sys 0m27.699s 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.471 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:14:25.471 ************************************ 00:14:25.471 END TEST nvmf_tls 00:14:25.471 ************************************ 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:25.730 ************************************ 00:14:25.730 START TEST nvmf_fips 00:14:25.730 ************************************ 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:25.730 * Looking for test storage... 00:14:25.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:14:25.730 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:25.730 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:25.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.731 --rc genhtml_branch_coverage=1 00:14:25.731 --rc genhtml_function_coverage=1 00:14:25.731 --rc genhtml_legend=1 00:14:25.731 --rc geninfo_all_blocks=1 00:14:25.731 --rc geninfo_unexecuted_blocks=1 00:14:25.731 00:14:25.731 ' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:25.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.731 --rc genhtml_branch_coverage=1 00:14:25.731 --rc genhtml_function_coverage=1 00:14:25.731 --rc genhtml_legend=1 00:14:25.731 --rc geninfo_all_blocks=1 00:14:25.731 --rc geninfo_unexecuted_blocks=1 00:14:25.731 00:14:25.731 ' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:25.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.731 --rc genhtml_branch_coverage=1 00:14:25.731 --rc genhtml_function_coverage=1 00:14:25.731 --rc genhtml_legend=1 00:14:25.731 --rc geninfo_all_blocks=1 00:14:25.731 --rc geninfo_unexecuted_blocks=1 00:14:25.731 00:14:25.731 ' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:25.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.731 --rc genhtml_branch_coverage=1 00:14:25.731 --rc genhtml_function_coverage=1 00:14:25.731 --rc genhtml_legend=1 00:14:25.731 --rc geninfo_all_blocks=1 00:14:25.731 --rc geninfo_unexecuted_blocks=1 00:14:25.731 00:14:25.731 ' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
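Editor's note: the `lt 1.15 2` trace above (and the `ge 3.1.1 3.0.0` OpenSSL check further down) both run through cmp_versions in scripts/common.sh, which splits the version strings into fields and compares them left to right. A rough Python equivalent of that idea, for illustration only (the real implementation is the traced bash function, which also splits on '-' and ':'):

```python
def version_ge(v1: str, v2: str) -> bool:
    """Field-wise dotted-version compare, roughly what the cmp_versions trace does."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    # Pad the shorter version with zeros, then compare component by component.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a >= b


assert version_ge("3.1.1", "3.0.0")      # OpenSSL 3.1.1 clears the 3.0.0 floor
assert not version_ge("1.15", "2")       # lcov 1.15 is older than 2
```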
00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:25.731 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:25.732 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:25.991 Error setting digest 00:14:25.991 40029262F37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:25.991 40029262F37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:25.991 
20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:25.991 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:25.992 Cannot find device "nvmf_init_br" 00:14:25.992 20:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:25.992 Cannot find device "nvmf_init_br2" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:25.992 Cannot find device "nvmf_tgt_br" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.992 Cannot find device "nvmf_tgt_br2" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:25.992 Cannot find device "nvmf_init_br" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:25.992 Cannot find device "nvmf_init_br2" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:25.992 Cannot find device "nvmf_tgt_br" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:25.992 Cannot find device "nvmf_tgt_br2" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:25.992 Cannot find device "nvmf_br" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:25.992 Cannot find device "nvmf_init_if" 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:25.992 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:26.251 Cannot find device "nvmf_init_if2" 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.251 20:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:26.251 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:26.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:26.252 00:14:26.252 --- 10.0.0.3 ping statistics --- 00:14:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.252 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:26.252 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:26.252 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:14:26.252 00:14:26.252 --- 10.0.0.4 ping statistics --- 00:14:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.252 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:26.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:26.252 00:14:26.252 --- 10.0.0.1 ping statistics --- 00:14:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.252 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:26.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:26.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:26.252 00:14:26.252 --- 10.0.0.2 ping statistics --- 00:14:26.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.252 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72948 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72948 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72948 ']' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.252 20:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:26.511 [2024-11-26 20:35:26.674696] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:14:26.511 [2024-11-26 20:35:26.674777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.511 [2024-11-26 20:35:26.824685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.771 [2024-11-26 20:35:26.889803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.771 [2024-11-26 20:35:26.889872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.771 [2024-11-26 20:35:26.889887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.771 [2024-11-26 20:35:26.889898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.771 [2024-11-26 20:35:26.889907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.771 [2024-11-26 20:35:26.890386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.771 [2024-11-26 20:35:26.949725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:27.337 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.MJL 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.MJL 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.MJL 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.MJL 00:14:27.595 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.854 [2024-11-26 20:35:27.977841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.854 [2024-11-26 20:35:27.993808] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.854 [2024-11-26 20:35:27.994027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.854 malloc0 00:14:27.854 20:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72995 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72995 /var/tmp/bdevperf.sock 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72995 ']' 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.854 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 [2024-11-26 20:35:28.143930] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:27.854 [2024-11-26 20:35:28.144029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:14:28.112 [2024-11-26 20:35:28.289952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.112 [2024-11-26 20:35:28.346811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.112 [2024-11-26 20:35:28.400416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.045 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.045 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:29.045 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.MJL 00:14:29.045 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:29.303 [2024-11-26 20:35:29.625167] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:29.563 TLSTESTn1 00:14:29.563 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:29.563 Running I/O for 10 seconds... 
00:14:31.875 3712.00 IOPS, 14.50 MiB/s [2024-11-26T20:35:33.167Z] 3776.00 IOPS, 14.75 MiB/s [2024-11-26T20:35:34.100Z] 3796.67 IOPS, 14.83 MiB/s [2024-11-26T20:35:35.034Z] 3806.75 IOPS, 14.87 MiB/s [2024-11-26T20:35:35.968Z] 3812.40 IOPS, 14.89 MiB/s [2024-11-26T20:35:36.906Z] 3812.17 IOPS, 14.89 MiB/s [2024-11-26T20:35:37.853Z] 3730.71 IOPS, 14.57 MiB/s [2024-11-26T20:35:39.249Z] 3732.38 IOPS, 14.58 MiB/s [2024-11-26T20:35:40.182Z] 3734.44 IOPS, 14.59 MiB/s [2024-11-26T20:35:40.182Z] 3746.10 IOPS, 14.63 MiB/s 00:14:39.827 Latency(us) 00:14:39.827 [2024-11-26T20:35:40.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.827 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:39.827 Verification LBA range: start 0x0 length 0x2000 00:14:39.827 TLSTESTn1 : 10.02 3752.07 14.66 0.00 0.00 34051.51 6970.65 36700.16 00:14:39.827 [2024-11-26T20:35:40.182Z] =================================================================================================================== 00:14:39.827 [2024-11-26T20:35:40.182Z] Total : 3752.07 14.66 0.00 0.00 34051.51 6970.65 36700.16 00:14:39.827 { 00:14:39.827 "results": [ 00:14:39.827 { 00:14:39.827 "job": "TLSTESTn1", 00:14:39.827 "core_mask": "0x4", 00:14:39.827 "workload": "verify", 00:14:39.827 "status": "finished", 00:14:39.827 "verify_range": { 00:14:39.827 "start": 0, 00:14:39.827 "length": 8192 00:14:39.827 }, 00:14:39.827 "queue_depth": 128, 00:14:39.828 "io_size": 4096, 00:14:39.828 "runtime": 10.017412, 00:14:39.828 "iops": 3752.0669011117843, 00:14:39.828 "mibps": 14.656511332467907, 00:14:39.828 "io_failed": 0, 00:14:39.828 "io_timeout": 0, 00:14:39.828 "avg_latency_us": 34051.50672310289, 00:14:39.828 "min_latency_us": 6970.647272727273, 00:14:39.828 "max_latency_us": 36700.16 00:14:39.828 } 00:14:39.828 ], 00:14:39.828 "core_count": 1 00:14:39.828 } 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:39.828 nvmf_trace.0 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72995 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72995 ']' 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72995 
00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72995 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:39.828 killing process with pid 72995 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72995' 00:14:39.828 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.828 00:14:39.828 Latency(us) 00:14:39.828 [2024-11-26T20:35:40.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.828 [2024-11-26T20:35:40.183Z] =================================================================================================================== 00:14:39.828 [2024-11-26T20:35:40.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72995 00:14:39.828 20:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72995 00:14:39.828 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:39.828 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.828 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.086 rmmod nvme_tcp 00:14:40.086 rmmod nvme_fabrics 00:14:40.086 rmmod nvme_keyring 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72948 ']' 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72948 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72948 ']' 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72948 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72948 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:40.086 killing process with pid 72948 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72948' 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72948 00:14:40.086 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72948 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.344 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:40.602 20:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.MJL 00:14:40.602 00:14:40.602 real 0m14.879s 00:14:40.602 user 0m20.759s 00:14:40.602 sys 0m5.735s 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:40.602 ************************************ 00:14:40.602 END TEST nvmf_fips 00:14:40.602 ************************************ 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.602 ************************************ 00:14:40.602 START TEST nvmf_control_msg_list 00:14:40.602 ************************************ 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:40.602 * Looking for test storage... 00:14:40.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:40.602 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:40.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.603 --rc genhtml_branch_coverage=1 00:14:40.603 --rc genhtml_function_coverage=1 00:14:40.603 --rc genhtml_legend=1 00:14:40.603 --rc geninfo_all_blocks=1 00:14:40.603 --rc geninfo_unexecuted_blocks=1 00:14:40.603 00:14:40.603 ' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:40.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.603 --rc genhtml_branch_coverage=1 00:14:40.603 --rc genhtml_function_coverage=1 00:14:40.603 --rc genhtml_legend=1 00:14:40.603 --rc geninfo_all_blocks=1 00:14:40.603 --rc geninfo_unexecuted_blocks=1 00:14:40.603 00:14:40.603 ' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:40.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.603 --rc genhtml_branch_coverage=1 00:14:40.603 --rc genhtml_function_coverage=1 00:14:40.603 --rc genhtml_legend=1 00:14:40.603 --rc geninfo_all_blocks=1 00:14:40.603 --rc geninfo_unexecuted_blocks=1 00:14:40.603 00:14:40.603 ' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:40.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.603 --rc genhtml_branch_coverage=1 00:14:40.603 --rc genhtml_function_coverage=1 00:14:40.603 --rc genhtml_legend=1 00:14:40.603 --rc geninfo_all_blocks=1 00:14:40.603 --rc 
geninfo_unexecuted_blocks=1 00:14:40.603 00:14:40.603 ' 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.603 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:40.861 Cannot find device "nvmf_init_br" 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:40.861 Cannot find device "nvmf_init_br2" 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:40.861 20:35:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:40.861 Cannot find device "nvmf_tgt_br" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.861 Cannot find device "nvmf_tgt_br2" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:40.861 Cannot find device "nvmf_init_br" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:40.861 Cannot find device "nvmf_init_br2" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:40.861 Cannot find device "nvmf_tgt_br" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:40.861 Cannot find device "nvmf_tgt_br2" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:40.861 Cannot find device "nvmf_br" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:40.861 Cannot find 
device "nvmf_init_if" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:40.861 Cannot find device "nvmf_init_if2" 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:40.861 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:40.862 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:41.119 20:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:41.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:14:41.119 00:14:41.119 --- 10.0.0.3 ping statistics --- 00:14:41.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.119 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:41.119 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:41.119 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:14:41.119 00:14:41.119 --- 10.0.0.4 ping statistics --- 00:14:41.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.119 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:41.119 00:14:41.119 --- 10.0.0.1 ping statistics --- 00:14:41.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.119 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:41.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:14:41.119 00:14:41.119 --- 10.0.0.2 ping statistics --- 00:14:41.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.119 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:41.119 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73385 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73385 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73385 ']' 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
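For orientation, the following is a hand-condensed sketch of the veth/bridge topology that the nvmf_veth_init trace above builds, reduced to a single initiator/target pair. The interface, namespace, and address names are the ones from the log, but this is not a verbatim excerpt of nvmf/common.sh (which creates a second pair of each and tags its iptables rules with an SPDK_NVMF comment so they can be removed later).

    # Target side lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end + bridge-facing end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end + bridge-facing end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the two *_br ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.3                                               # sanity check: root namespace reaches the target IP

This is why the perf commands later in the log connect to traddr:10.0.0.3 trsvcid:4420 from the root namespace while nvmf_tgt itself runs under ip netns exec nvmf_tgt_ns_spdk.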
00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.120 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.120 [2024-11-26 20:35:41.426193] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:41.120 [2024-11-26 20:35:41.426300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.377 [2024-11-26 20:35:41.579945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.377 [2024-11-26 20:35:41.645092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.377 [2024-11-26 20:35:41.645143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.378 [2024-11-26 20:35:41.645157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.378 [2024-11-26 20:35:41.645167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.378 [2024-11-26 20:35:41.645177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.378 [2024-11-26 20:35:41.645644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.378 [2024-11-26 20:35:41.702552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.635 [2024-11-26 20:35:41.819063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.635 Malloc0 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.635 [2024-11-26 20:35:41.858996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73409 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73410 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73411 00:14:41.635 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:41.636 20:35:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73409 00:14:41.893 [2024-11-26 20:35:42.047351] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:41.893 [2024-11-26 20:35:42.057508] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:41.893 [2024-11-26 20:35:42.067862] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:42.829 Initializing NVMe Controllers 00:14:42.829 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:42.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:42.829 Initialization complete. Launching workers. 00:14:42.829 ======================================================== 00:14:42.829 Latency(us) 00:14:42.829 Device Information : IOPS MiB/s Average min max 00:14:42.829 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3396.00 13.27 294.09 143.85 826.23 00:14:42.829 ======================================================== 00:14:42.829 Total : 3396.00 13.27 294.09 143.85 826.23 00:14:42.829 00:14:42.829 Initializing NVMe Controllers 00:14:42.829 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:42.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:42.829 Initialization complete. Launching workers. 00:14:42.829 ======================================================== 00:14:42.829 Latency(us) 00:14:42.829 Device Information : IOPS MiB/s Average min max 00:14:42.829 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3411.00 13.32 292.80 193.02 826.53 00:14:42.829 ======================================================== 00:14:42.829 Total : 3411.00 13.32 292.80 193.02 826.53 00:14:42.829 00:14:42.829 Initializing NVMe Controllers 00:14:42.829 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:42.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:42.829 Initialization complete. Launching workers. 
00:14:42.829 ======================================================== 00:14:42.829 Latency(us) 00:14:42.829 Device Information : IOPS MiB/s Average min max 00:14:42.829 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3453.94 13.49 289.13 124.23 825.61 00:14:42.829 ======================================================== 00:14:42.829 Total : 3453.94 13.49 289.13 124.23 825.61 00:14:42.829 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73410 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73411 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.829 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.829 rmmod nvme_tcp 00:14:42.829 rmmod nvme_fabrics 00:14:42.829 rmmod nvme_keyring 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73385 ']' 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73385 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73385 ']' 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73385 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73385 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.088 killing process with pid 73385 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73385' 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73385 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73385 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:43.088 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:43.347 00:14:43.347 real 0m2.903s 00:14:43.347 user 0m4.858s 00:14:43.347 
sys 0m1.291s 00:14:43.347 ************************************ 00:14:43.347 END TEST nvmf_control_msg_list 00:14:43.347 ************************************ 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.347 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.606 ************************************ 00:14:43.606 START TEST nvmf_wait_for_buf 00:14:43.606 ************************************ 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:43.606 * Looking for test storage... 00:14:43.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.606 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:43.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.607 --rc genhtml_branch_coverage=1 00:14:43.607 --rc genhtml_function_coverage=1 00:14:43.607 --rc genhtml_legend=1 00:14:43.607 --rc geninfo_all_blocks=1 00:14:43.607 --rc geninfo_unexecuted_blocks=1 00:14:43.607 00:14:43.607 ' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:43.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.607 --rc genhtml_branch_coverage=1 00:14:43.607 --rc genhtml_function_coverage=1 00:14:43.607 --rc genhtml_legend=1 00:14:43.607 --rc geninfo_all_blocks=1 00:14:43.607 --rc geninfo_unexecuted_blocks=1 00:14:43.607 00:14:43.607 ' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:43.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.607 --rc genhtml_branch_coverage=1 00:14:43.607 --rc genhtml_function_coverage=1 00:14:43.607 --rc genhtml_legend=1 00:14:43.607 --rc geninfo_all_blocks=1 00:14:43.607 --rc geninfo_unexecuted_blocks=1 00:14:43.607 00:14:43.607 ' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:43.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.607 --rc genhtml_branch_coverage=1 00:14:43.607 --rc genhtml_function_coverage=1 00:14:43.607 --rc genhtml_legend=1 00:14:43.607 --rc geninfo_all_blocks=1 00:14:43.607 --rc geninfo_unexecuted_blocks=1 00:14:43.607 00:14:43.607 ' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.607 20:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
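Before the same veth bring-up repeats for nvmf_wait_for_buf, here is the essence of the control_msg_list test that just finished, restated as plain commands. In the trace, rpc_cmd forwards to scripts/rpc.py against the target's RPC socket; the socket argument is omitted here and the three perf launches are rewritten as a loop, so treat this as a summary of the trace rather than the script itself.

    # Transport with a single control message buffer and a small in-capsule data size.
    rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # Three perf instances on separate cores contend for that single control message.
    for mask in 0x2 0x4 0x8; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait   # the script waits on each perf pid, as the wait 73409/73410/73411 calls above show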
00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.607 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:43.608 Cannot find device "nvmf_init_br" 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:43.608 Cannot find device "nvmf_init_br2" 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:43.608 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:43.868 Cannot find device "nvmf_tgt_br" 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.868 Cannot find device "nvmf_tgt_br2" 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:43.868 Cannot find device "nvmf_init_br" 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:43.868 Cannot find device "nvmf_init_br2" 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:43.868 20:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:43.868 Cannot find device "nvmf_tgt_br" 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:43.868 Cannot find device "nvmf_tgt_br2" 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:43.868 Cannot find device "nvmf_br" 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:43.868 Cannot find device "nvmf_init_if" 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:43.868 Cannot find device "nvmf_init_if2" 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.868 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.868 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.127 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.127 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:44.127 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:44.127 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:44.127 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:44.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:44.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:44.128 00:14:44.128 --- 10.0.0.3 ping statistics --- 00:14:44.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.128 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:44.128 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:44.128 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:14:44.128 00:14:44.128 --- 10.0.0.4 ping statistics --- 00:14:44.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.128 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:44.128 00:14:44.128 --- 10.0.0.1 ping statistics --- 00:14:44.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.128 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:44.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:44.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:14:44.128 00:14:44.128 --- 10.0.0.2 ping statistics --- 00:14:44.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.128 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73653 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73653 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73653 ']' 00:14:44.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.128 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.128 [2024-11-26 20:35:44.351398] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
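Note that this target is launched with --wait-for-rpc: the application pauses before subsystem initialization so the test can shrink the iobuf pool first, which appears to be the point of wait_for_buf. The RPC and perf sequence issued in the rest of the trace boils down to roughly the following sketch (again via scripts/rpc.py with the socket argument omitted; option values are copied from the trace, and reading -n/-b as the shared-buffer count and buffer-cache size is the usual nvmf_create_transport meaning, not something the log states explicitly).

    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny small-buffer pool
    rpc.py framework_start_init                                            # only now does startup continue
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # few shared data buffers for the transport
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'           # 128 KiB reads so the target must wait for buffers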
00:14:44.128 [2024-11-26 20:35:44.351735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.387 [2024-11-26 20:35:44.501865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.387 [2024-11-26 20:35:44.560771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.387 [2024-11-26 20:35:44.561055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.387 [2024-11-26 20:35:44.561184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.387 [2024-11-26 20:35:44.561266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.387 [2024-11-26 20:35:44.561372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.387 [2024-11-26 20:35:44.561804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:44.387 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.387 20:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.387 [2024-11-26 20:35:44.710351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.645 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.646 Malloc0 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.646 [2024-11-26 20:35:44.782244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.646 [2024-11-26 20:35:44.806299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.646 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:44.903 [2024-11-26 20:35:45.009367] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:46.280 Initializing NVMe Controllers 00:14:46.280 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:46.280 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:46.280 Initialization complete. Launching workers. 00:14:46.280 ======================================================== 00:14:46.280 Latency(us) 00:14:46.280 Device Information : IOPS MiB/s Average min max 00:14:46.280 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 508.00 63.50 7921.62 3964.74 10969.24 00:14:46.280 ======================================================== 00:14:46.280 Total : 508.00 63.50 7921.62 3964.74 10969.24 00:14:46.280 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.280 rmmod nvme_tcp 00:14:46.280 rmmod nvme_fabrics 00:14:46.280 rmmod nvme_keyring 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73653 ']' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73653 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73653 ']' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73653 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73653 00:14:46.280 killing process with pid 73653 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73653' 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73653 00:14:46.280 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73653 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.540 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:46.799 ************************************ 00:14:46.799 END TEST nvmf_wait_for_buf 00:14:46.799 ************************************ 00:14:46.799 00:14:46.799 real 0m3.197s 00:14:46.799 user 0m2.592s 00:14:46.799 sys 0m0.762s 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 ************************************ 00:14:46.799 START TEST nvmf_nsid 00:14:46.799 ************************************ 00:14:46.799 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:46.799 * Looking for test storage... 
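What the nvmf_wait_for_buf run that just ended was checking: the target was configured with a deliberately tiny iobuf small pool (iobuf_set_options --small-pool-count 154) and a TCP transport with only 24 buffers (nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24), so the 128 KiB randread perf workload was forced to wait for buffers; the test then required the nvmf_TCP small_pool.retry counter from iobuf_get_stats to be nonzero (4826 in this run). A sketch of that final assertion, reusing the jq filter from the trace with the rpc.py path abbreviated:

  retry_count=$(./scripts/rpc.py iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  # a zero retry count would mean buffer exhaustion was never hit, i.e. the test failed
  [[ $retry_count -eq 0 ]] && exit 1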
00:14:46.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:46.799 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:46.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.800 --rc genhtml_branch_coverage=1 00:14:46.800 --rc genhtml_function_coverage=1 00:14:46.800 --rc genhtml_legend=1 00:14:46.800 --rc geninfo_all_blocks=1 00:14:46.800 --rc geninfo_unexecuted_blocks=1 00:14:46.800 00:14:46.800 ' 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:46.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.800 --rc genhtml_branch_coverage=1 00:14:46.800 --rc genhtml_function_coverage=1 00:14:46.800 --rc genhtml_legend=1 00:14:46.800 --rc geninfo_all_blocks=1 00:14:46.800 --rc geninfo_unexecuted_blocks=1 00:14:46.800 00:14:46.800 ' 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:46.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.800 --rc genhtml_branch_coverage=1 00:14:46.800 --rc genhtml_function_coverage=1 00:14:46.800 --rc genhtml_legend=1 00:14:46.800 --rc geninfo_all_blocks=1 00:14:46.800 --rc geninfo_unexecuted_blocks=1 00:14:46.800 00:14:46.800 ' 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:46.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.800 --rc genhtml_branch_coverage=1 00:14:46.800 --rc genhtml_function_coverage=1 00:14:46.800 --rc genhtml_legend=1 00:14:46.800 --rc geninfo_all_blocks=1 00:14:46.800 --rc geninfo_unexecuted_blocks=1 00:14:46.800 00:14:46.800 ' 00:14:46.800 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
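The lt/cmp_versions trace above is the stock version comparison used to pick lcov options: each version string is split on '.', '-' and ':' and the components are compared numerically, left to right. A trimmed-down equivalent for the numeric case only (the real scripts/common.sh helper also serves the other comparison operators):

  lt() {    # true when version $1 sorts strictly before version $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo 'lcov predates 2.x'    # mirrors the traced call: lt 1.15 2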
00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:47.059 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:47.060 Cannot find device "nvmf_init_br" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:47.060 Cannot find device "nvmf_init_br2" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:47.060 Cannot find device "nvmf_tgt_br" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.060 Cannot find device "nvmf_tgt_br2" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:47.060 Cannot find device "nvmf_init_br" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:47.060 Cannot find device "nvmf_init_br2" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:47.060 Cannot find device "nvmf_tgt_br" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:47.060 Cannot find device "nvmf_tgt_br2" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:47.060 Cannot find device "nvmf_br" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:47.060 Cannot find device "nvmf_init_if" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:47.060 Cannot find device "nvmf_init_if2" 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:47.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:47.060 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:47.318 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
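The nvmf_veth_init sequence above (common.sh@177 through @214) rebuilds the same topology the earlier wait_for_buf run used: two initiator veth halves keep 10.0.0.1 and 10.0.0.2 in the root namespace, their target-side peers get 10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk, and the four bridge-side peers are enslaved to nvmf_br. A condensed sketch of one initiator/target pair, using the interface names from the trace (the real helper also creates the *_if2 pair and brings every link up):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # the *_br peers form the L2 path
  ip link set nvmf_tgt_br master nvmf_br

With that in place, the pings that follow (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside the namespace) confirm both directions across the bridge before any NVMe/TCP traffic is attempted.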
00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:47.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:47.319 00:14:47.319 --- 10.0.0.3 ping statistics --- 00:14:47.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.319 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:47.319 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:47.319 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:47.319 00:14:47.319 --- 10.0.0.4 ping statistics --- 00:14:47.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.319 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:47.319 00:14:47.319 --- 10.0.0.1 ping statistics --- 00:14:47.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.319 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:47.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:47.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:47.319 00:14:47.319 --- 10.0.0.2 ping statistics --- 00:14:47.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.319 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73912 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73912 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73912 ']' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.319 20:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:47.319 [2024-11-26 20:35:47.647991] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
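Both the firewall setup and its eventual teardown in this log hinge on tagging rules with an SPDK_NVMF comment: ipts (common.sh@790) appends -m comment --comment 'SPDK_NVMF:<original args>' to every rule it inserts, and iptr (common.sh@791) later removes exactly those rules by filtering them out of iptables-save output before restoring. Reconstructed from the traced expansions (the real helpers may carry extra details):

  ipts() {    # insert a rule, tagged so cleanup can find it again
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {    # drop every rule carrying the SPDK_NVMF tag
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # as in common.sh@217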
00:14:47.319 [2024-11-26 20:35:47.648087] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.577 [2024-11-26 20:35:47.802994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.577 [2024-11-26 20:35:47.869085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.577 [2024-11-26 20:35:47.869149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.577 [2024-11-26 20:35:47.869164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.577 [2024-11-26 20:35:47.869175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.578 [2024-11-26 20:35:47.869185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.578 [2024-11-26 20:35:47.869678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.578 [2024-11-26 20:35:47.927832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73944 00:14:48.513 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e15efa47-e2fd-48d9-bd27-b71a6f21b46f 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3645a950-d5b2-495c-be06-0756c774a0db 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=57ab473e-c43a-433b-9723-dbe0df20b015 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:48.514 null0 00:14:48.514 null1 00:14:48.514 null2 00:14:48.514 [2024-11-26 20:35:48.772978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.514 [2024-11-26 20:35:48.785082] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:48.514 [2024-11-26 20:35:48.785667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73944 ] 00:14:48.514 [2024-11-26 20:35:48.797141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:48.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73944 /var/tmp/tgt2.sock 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73944 ']' 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
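The three uuidgen values captured above become the namespace UUIDs on the second target; further down, the test folds each one into the NGUID form reported by nvme id-ns by dropping the dashes and uppercasing before comparing (the uuid2nguid / nvme_get_nguid steps). A one-liner showing the same conversion on the first UUID from this run:

  uuid=e15efa47-e2fd-48d9-bd27-b71a6f21b46f
  nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
  echo "$nguid"    # E15EFA47E2FD48D9BD27B71A6F21B46F, the value matched at target/nsid.sh@96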
00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.514 20:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 [2024-11-26 20:35:48.928703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.773 [2024-11-26 20:35:48.990844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.773 [2024-11-26 20:35:49.061377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.032 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.032 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:49.032 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:49.291 [2024-11-26 20:35:49.629366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.549 [2024-11-26 20:35:49.645487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:49.549 nvme0n1 nvme0n2 00:14:49.549 nvme1n1 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:49.549 20:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:50.514 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:50.514 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:50.514 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:50.514 20:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e15efa47-e2fd-48d9-bd27-b71a6f21b46f 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e15efa47e2fd48d9bd27b71a6f21b46f 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E15EFA47E2FD48D9BD27B71A6F21B46F 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E15EFA47E2FD48D9BD27B71A6F21B46F == \E\1\5\E\F\A\4\7\E\2\F\D\4\8\D\9\B\D\2\7\B\7\1\A\6\F\2\1\B\4\6\F ]] 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3645a950-d5b2-495c-be06-0756c774a0db 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:50.774 20:35:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3645a950d5b2495cbe060756c774a0db 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3645A950D5B2495CBE060756C774A0DB 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3645A950D5B2495CBE060756C774A0DB == \3\6\4\5\A\9\5\0\D\5\B\2\4\9\5\C\B\E\0\6\0\7\5\6\C\7\7\4\A\0\D\B ]] 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:50.774 20:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 57ab473e-c43a-433b-9723-dbe0df20b015 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=57ab473ec43a433b9723dbe0df20b015 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 57AB473EC43A433B9723DBE0DF20B015 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 57AB473EC43A433B9723DBE0DF20B015 == \5\7\A\B\4\7\3\E\C\4\3\A\4\3\3\B\9\7\2\3\D\B\E\0\D\F\2\0\B\0\1\5 ]] 00:14:50.774 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73944 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73944 ']' 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73944 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73944 00:14:51.034 killing process with pid 73944 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73944' 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73944 00:14:51.034 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73944 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.601 rmmod nvme_tcp 00:14:51.601 rmmod nvme_fabrics 00:14:51.601 rmmod nvme_keyring 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73912 ']' 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73912 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73912 ']' 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73912 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73912 00:14:51.601 killing process with pid 73912 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73912' 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73912 00:14:51.601 20:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73912 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.861 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:52.119 00:14:52.119 real 0m5.303s 00:14:52.119 user 0m7.668s 00:14:52.119 sys 0m1.738s 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.119 ************************************ 00:14:52.119 END TEST nvmf_nsid 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:52.119 ************************************ 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:52.119 ************************************ 00:14:52.119 END TEST nvmf_target_extra 00:14:52.119 ************************************ 00:14:52.119 00:14:52.119 real 5m19.893s 00:14:52.119 user 11m12.910s 00:14:52.119 sys 1m9.821s 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.119 20:35:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.119 20:35:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:52.119 20:35:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.119 20:35:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.119 20:35:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.119 ************************************ 00:14:52.119 START TEST nvmf_host 00:14:52.119 ************************************ 00:14:52.119 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:52.119 * Looking for test storage... 
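For reference, the NGUID checks performed by the nsid test above reduce to the comparison below. This is a hedged reconstruction built from the UUID and device names printed in the log, not the target/nsid.sh source itself; the device name is illustrative.

    # Namespace UUID with dashes stripped and upper-cased should equal the NGUID
    # that nvme-cli reports for the attached namespace.
    uuid=e15efa47-e2fd-48d9-bd27-b71a6f21b46f
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ "$actual" == "$expected" ]] && echo "NGUID matches UUID" || echo "NGUID mismatch"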
00:14:52.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:52.119 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.119 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.119 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.379 --rc genhtml_branch_coverage=1 00:14:52.379 --rc genhtml_function_coverage=1 00:14:52.379 --rc genhtml_legend=1 00:14:52.379 --rc geninfo_all_blocks=1 00:14:52.379 --rc geninfo_unexecuted_blocks=1 00:14:52.379 00:14:52.379 ' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.379 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:52.379 --rc genhtml_branch_coverage=1 00:14:52.379 --rc genhtml_function_coverage=1 00:14:52.379 --rc genhtml_legend=1 00:14:52.379 --rc geninfo_all_blocks=1 00:14:52.379 --rc geninfo_unexecuted_blocks=1 00:14:52.379 00:14:52.379 ' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.379 --rc genhtml_branch_coverage=1 00:14:52.379 --rc genhtml_function_coverage=1 00:14:52.379 --rc genhtml_legend=1 00:14:52.379 --rc geninfo_all_blocks=1 00:14:52.379 --rc geninfo_unexecuted_blocks=1 00:14:52.379 00:14:52.379 ' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.379 --rc genhtml_branch_coverage=1 00:14:52.379 --rc genhtml_function_coverage=1 00:14:52.379 --rc genhtml_legend=1 00:14:52.379 --rc geninfo_all_blocks=1 00:14:52.379 --rc geninfo_unexecuted_blocks=1 00:14:52.379 00:14:52.379 ' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.379 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:52.379 
20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.379 20:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:52.379 ************************************ 00:14:52.379 START TEST nvmf_identify 00:14:52.379 ************************************ 00:14:52.380 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:52.380 * Looking for test storage... 00:14:52.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:52.380 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.380 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.380 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.639 --rc genhtml_branch_coverage=1 00:14:52.639 --rc genhtml_function_coverage=1 00:14:52.639 --rc genhtml_legend=1 00:14:52.639 --rc geninfo_all_blocks=1 00:14:52.639 --rc geninfo_unexecuted_blocks=1 00:14:52.639 00:14:52.639 ' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.639 --rc genhtml_branch_coverage=1 00:14:52.639 --rc genhtml_function_coverage=1 00:14:52.639 --rc genhtml_legend=1 00:14:52.639 --rc geninfo_all_blocks=1 00:14:52.639 --rc geninfo_unexecuted_blocks=1 00:14:52.639 00:14:52.639 ' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.639 --rc genhtml_branch_coverage=1 00:14:52.639 --rc genhtml_function_coverage=1 00:14:52.639 --rc genhtml_legend=1 00:14:52.639 --rc geninfo_all_blocks=1 00:14:52.639 --rc geninfo_unexecuted_blocks=1 00:14:52.639 00:14:52.639 ' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.639 --rc genhtml_branch_coverage=1 00:14:52.639 --rc genhtml_function_coverage=1 00:14:52.639 --rc genhtml_legend=1 00:14:52.639 --rc geninfo_all_blocks=1 00:14:52.639 --rc geninfo_unexecuted_blocks=1 00:14:52.639 00:14:52.639 ' 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.639 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.640 
20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.640 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.640 20:35:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.640 Cannot find device "nvmf_init_br" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.640 Cannot find device "nvmf_init_br2" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.640 Cannot find device "nvmf_tgt_br" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:52.640 Cannot find device "nvmf_tgt_br2" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.640 Cannot find device "nvmf_init_br" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.640 Cannot find device "nvmf_init_br2" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.640 Cannot find device "nvmf_tgt_br" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.640 Cannot find device "nvmf_tgt_br2" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.640 Cannot find device "nvmf_br" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.640 Cannot find device "nvmf_init_if" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.640 Cannot find device "nvmf_init_if2" 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.640 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.641 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.641 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.899 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.899 20:35:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.899 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.899 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.899 
20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.899 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:52.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:52.900 00:14:52.900 --- 10.0.0.3 ping statistics --- 00:14:52.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.900 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.900 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.900 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:14:52.900 00:14:52.900 --- 10.0.0.4 ping statistics --- 00:14:52.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.900 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:52.900 00:14:52.900 --- 10.0.0.1 ping statistics --- 00:14:52.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.900 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:14:52.900 00:14:52.900 --- 10.0.0.2 ping statistics --- 00:14:52.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.900 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:52.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
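The interface setup and ping checks above come from the harness's veth-based test network. A minimal sketch of that topology, reconstructed from the commands recorded in the log (interface names and 10.0.0.0/24 addresses as printed; the harness also creates a second nvmf_init_if2/nvmf_tgt_if2 pair the same way, which is omitted here):

    # Target runs in its own network namespace, reachable from the host over a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side ends
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.3                                           # host -> target sanity check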
00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74293 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74293 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74293 ']' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.900 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.159 [2024-11-26 20:35:53.300277] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:53.159 [2024-11-26 20:35:53.300605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.159 [2024-11-26 20:35:53.455182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.418 [2024-11-26 20:35:53.521244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.419 [2024-11-26 20:35:53.521539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.419 [2024-11-26 20:35:53.521724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.419 [2024-11-26 20:35:53.521908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.419 [2024-11-26 20:35:53.521952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
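Stripped of the harness wrappers, the target launch and readiness wait recorded above amount to the following; the polling loop is only an approximation of what waitforlisten does, not the actual helper.

    # Start nvmf_tgt inside the target namespace and wait for its RPC socket to answer.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done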
00:14:53.419 [2024-11-26 20:35:53.523288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.419 [2024-11-26 20:35:53.523381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.419 [2024-11-26 20:35:53.524098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.419 [2024-11-26 20:35:53.524110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.419 [2024-11-26 20:35:53.581685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.419 [2024-11-26 20:35:53.659520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.419 Malloc0 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.419 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.681 [2024-11-26 20:35:53.785294] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.681 [ 00:14:53.681 { 00:14:53.681 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.681 "subtype": "Discovery", 00:14:53.681 "listen_addresses": [ 00:14:53.681 { 00:14:53.681 "trtype": "TCP", 00:14:53.681 "adrfam": "IPv4", 00:14:53.681 "traddr": "10.0.0.3", 00:14:53.681 "trsvcid": "4420" 00:14:53.681 } 00:14:53.681 ], 00:14:53.681 "allow_any_host": true, 00:14:53.681 "hosts": [] 00:14:53.681 }, 00:14:53.681 { 00:14:53.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.681 "subtype": "NVMe", 00:14:53.681 "listen_addresses": [ 00:14:53.681 { 00:14:53.681 "trtype": "TCP", 00:14:53.681 "adrfam": "IPv4", 00:14:53.681 "traddr": "10.0.0.3", 00:14:53.681 "trsvcid": "4420" 00:14:53.681 } 00:14:53.681 ], 00:14:53.681 "allow_any_host": true, 00:14:53.681 "hosts": [], 00:14:53.681 "serial_number": "SPDK00000000000001", 00:14:53.681 "model_number": "SPDK bdev Controller", 00:14:53.681 "max_namespaces": 32, 00:14:53.681 "min_cntlid": 1, 00:14:53.681 "max_cntlid": 65519, 00:14:53.681 "namespaces": [ 00:14:53.681 { 00:14:53.681 "nsid": 1, 00:14:53.681 "bdev_name": "Malloc0", 00:14:53.681 "name": "Malloc0", 00:14:53.681 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:53.681 "eui64": "ABCDEF0123456789", 00:14:53.681 "uuid": "8962f13d-374d-417d-8505-bd2efd325071" 00:14:53.681 } 00:14:53.681 ] 00:14:53.681 } 00:14:53.681 ] 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.681 20:35:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:53.681 [2024-11-26 20:35:53.842682] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
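Collecting the rpc_cmd calls above into one place, the target provisioning for this identify test looks roughly like the sequence below in plain rpc.py terms. This is an illustrative consolidation of what the log shows (flags copied verbatim), not a quote from host/identify.sh.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192            # TCP transport, flags as recorded above
    "$rpc" bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_get_subsystems                                # prints the JSON shown above
    # Host-side query that follows in the log:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all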
00:14:53.681 [2024-11-26 20:35:53.842738] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74322 ] 00:14:53.681 [2024-11-26 20:35:54.004940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:53.681 [2024-11-26 20:35:54.005006] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:53.681 [2024-11-26 20:35:54.005013] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:53.681 [2024-11-26 20:35:54.005029] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:53.681 [2024-11-26 20:35:54.005041] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:53.681 [2024-11-26 20:35:54.005388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:53.681 [2024-11-26 20:35:54.005450] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1554750 0 00:14:53.681 [2024-11-26 20:35:54.012247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:53.681 [2024-11-26 20:35:54.012271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:53.681 [2024-11-26 20:35:54.012277] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:53.681 [2024-11-26 20:35:54.012281] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:53.681 [2024-11-26 20:35:54.012319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.681 [2024-11-26 20:35:54.012326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.681 [2024-11-26 20:35:54.012331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.681 [2024-11-26 20:35:54.012345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:53.681 [2024-11-26 20:35:54.012377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.681 [2024-11-26 20:35:54.020286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.681 [2024-11-26 20:35:54.020307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.681 [2024-11-26 20:35:54.020312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.681 [2024-11-26 20:35:54.020318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.681 [2024-11-26 20:35:54.020333] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:53.681 [2024-11-26 20:35:54.020342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:53.681 [2024-11-26 20:35:54.020354] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:53.681 [2024-11-26 20:35:54.020373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.681 [2024-11-26 20:35:54.020379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:53.681 [2024-11-26 20:35:54.020383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.681 [2024-11-26 20:35:54.020393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.681 [2024-11-26 20:35:54.020422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.681 [2024-11-26 20:35:54.020485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.681 [2024-11-26 20:35:54.020492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.681 [2024-11-26 20:35:54.020496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.681 [2024-11-26 20:35:54.020500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.681 [2024-11-26 20:35:54.020507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:53.682 [2024-11-26 20:35:54.020515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:53.682 [2024-11-26 20:35:54.020523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.020540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.682 [2024-11-26 20:35:54.020559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.020614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.020621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.020625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.020636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:53.682 [2024-11-26 20:35:54.020645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.682 [2024-11-26 20:35:54.020652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.020668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.682 [2024-11-26 20:35:54.020687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.020734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.020741] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.020745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.020755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.682 [2024-11-26 20:35:54.020766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.020782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.682 [2024-11-26 20:35:54.020800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.020849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.020856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.020859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.020864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.020869] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:53.682 [2024-11-26 20:35:54.020874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:53.682 [2024-11-26 20:35:54.020883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.682 [2024-11-26 20:35:54.020994] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:53.682 [2024-11-26 20:35:54.021000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.682 [2024-11-26 20:35:54.021010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.021026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.682 [2024-11-26 20:35:54.021046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.021093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.021100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.021104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:53.682 [2024-11-26 20:35:54.021108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.021114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.682 [2024-11-26 20:35:54.021124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.021141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.682 [2024-11-26 20:35:54.021158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.021204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.021211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.021215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.021241] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.682 [2024-11-26 20:35:54.021246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:53.682 [2024-11-26 20:35:54.021255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:53.682 [2024-11-26 20:35:54.021266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.682 [2024-11-26 20:35:54.021277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.021290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.682 [2024-11-26 20:35:54.021311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.021404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.682 [2024-11-26 20:35:54.021412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.682 [2024-11-26 20:35:54.021416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021420] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1554750): datao=0, datal=4096, cccid=0 00:14:53.682 [2024-11-26 20:35:54.021425] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b8740) on tqpair(0x1554750): expected_datao=0, payload_size=4096 00:14:53.682 [2024-11-26 20:35:54.021430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021438] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021443] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.021458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.021462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.021476] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:53.682 [2024-11-26 20:35:54.021482] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:53.682 [2024-11-26 20:35:54.021486] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:53.682 [2024-11-26 20:35:54.021497] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:53.682 [2024-11-26 20:35:54.021503] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:53.682 [2024-11-26 20:35:54.021508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:53.682 [2024-11-26 20:35:54.021517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.682 [2024-11-26 20:35:54.021526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.021542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.682 [2024-11-26 20:35:54.021563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.682 [2024-11-26 20:35:54.021619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.682 [2024-11-26 20:35:54.021626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.682 [2024-11-26 20:35:54.021630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.682 [2024-11-26 20:35:54.021643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1554750) 00:14:53.682 [2024-11-26 20:35:54.021658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.682 
[2024-11-26 20:35:54.021665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.682 [2024-11-26 20:35:54.021673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.021679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.683 [2024-11-26 20:35:54.021686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.021700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.683 [2024-11-26 20:35:54.021706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.021720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.683 [2024-11-26 20:35:54.021725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.683 [2024-11-26 20:35:54.021734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.683 [2024-11-26 20:35:54.021741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.021758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.683 [2024-11-26 20:35:54.021786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8740, cid 0, qid 0 00:14:53.683 [2024-11-26 20:35:54.021794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b88c0, cid 1, qid 0 00:14:53.683 [2024-11-26 20:35:54.021799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8a40, cid 2, qid 0 00:14:53.683 [2024-11-26 20:35:54.021804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.683 [2024-11-26 20:35:54.021809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8d40, cid 4, qid 0 00:14:53.683 [2024-11-26 20:35:54.021912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.683 [2024-11-26 20:35:54.021919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.683 [2024-11-26 20:35:54.021923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8d40) on tqpair=0x1554750 00:14:53.683 [2024-11-26 
20:35:54.021933] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:53.683 [2024-11-26 20:35:54.021939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:53.683 [2024-11-26 20:35:54.021951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.021956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.021963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.683 [2024-11-26 20:35:54.021981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8d40, cid 4, qid 0 00:14:53.683 [2024-11-26 20:35:54.022047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.683 [2024-11-26 20:35:54.022054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.683 [2024-11-26 20:35:54.022057] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022061] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1554750): datao=0, datal=4096, cccid=4 00:14:53.683 [2024-11-26 20:35:54.022066] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b8d40) on tqpair(0x1554750): expected_datao=0, payload_size=4096 00:14:53.683 [2024-11-26 20:35:54.022071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022083] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.683 [2024-11-26 20:35:54.022097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.683 [2024-11-26 20:35:54.022101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8d40) on tqpair=0x1554750 00:14:53.683 [2024-11-26 20:35:54.022119] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:53.683 [2024-11-26 20:35:54.022146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.022159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.683 [2024-11-26 20:35:54.022167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.022181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.683 [2024-11-26 20:35:54.022206] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8d40, cid 4, qid 0 00:14:53.683 [2024-11-26 20:35:54.022214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8ec0, cid 5, qid 0 00:14:53.683 [2024-11-26 20:35:54.022343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.683 [2024-11-26 20:35:54.022352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.683 [2024-11-26 20:35:54.022355] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022359] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1554750): datao=0, datal=1024, cccid=4 00:14:53.683 [2024-11-26 20:35:54.022364] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b8d40) on tqpair(0x1554750): expected_datao=0, payload_size=1024 00:14:53.683 [2024-11-26 20:35:54.022369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022376] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022380] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.683 [2024-11-26 20:35:54.022393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.683 [2024-11-26 20:35:54.022396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8ec0) on tqpair=0x1554750 00:14:53.683 [2024-11-26 20:35:54.022419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.683 [2024-11-26 20:35:54.022427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.683 [2024-11-26 20:35:54.022430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8d40) on tqpair=0x1554750 00:14:53.683 [2024-11-26 20:35:54.022448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.022461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.683 [2024-11-26 20:35:54.022485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8d40, cid 4, qid 0 00:14:53.683 [2024-11-26 20:35:54.022556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.683 [2024-11-26 20:35:54.022563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.683 [2024-11-26 20:35:54.022566] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022570] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1554750): datao=0, datal=3072, cccid=4 00:14:53.683 [2024-11-26 20:35:54.022575] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b8d40) on tqpair(0x1554750): expected_datao=0, payload_size=3072 00:14:53.683 [2024-11-26 20:35:54.022580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022587] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:14:53.683 [2024-11-26 20:35:54.022591] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.683 [2024-11-26 20:35:54.022606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.683 [2024-11-26 20:35:54.022610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8d40) on tqpair=0x1554750 00:14:53.683 [2024-11-26 20:35:54.022624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1554750) 00:14:53.683 [2024-11-26 20:35:54.022637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.683 [2024-11-26 20:35:54.022660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8d40, cid 4, qid 0 00:14:53.683 [2024-11-26 20:35:54.022723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.683 [2024-11-26 20:35:54.022730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.683 [2024-11-26 20:35:54.022734] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1554750): datao=0, datal=8, cccid=4 00:14:53.683 [2024-11-26 20:35:54.022743] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b8d40) on tqpair(0x1554750): expected_datao=0, payload_size=8 00:14:53.683 [2024-11-26 20:35:54.022747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022754] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.683 [2024-11-26 20:35:54.022758] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.683 ===================================================== 00:14:53.683 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:53.683 ===================================================== 00:14:53.683 Controller Capabilities/Features 00:14:53.683 ================================ 00:14:53.683 Vendor ID: 0000 00:14:53.683 Subsystem Vendor ID: 0000 00:14:53.683 Serial Number: .................... 00:14:53.683 Model Number: ........................................ 
00:14:53.683 Firmware Version: 25.01 00:14:53.683 Recommended Arb Burst: 0 00:14:53.683 IEEE OUI Identifier: 00 00 00 00:14:53.683 Multi-path I/O 00:14:53.684 May have multiple subsystem ports: No 00:14:53.684 May have multiple controllers: No 00:14:53.684 Associated with SR-IOV VF: No 00:14:53.684 Max Data Transfer Size: 131072 00:14:53.684 Max Number of Namespaces: 0 00:14:53.684 Max Number of I/O Queues: 1024 00:14:53.684 NVMe Specification Version (VS): 1.3 00:14:53.684 NVMe Specification Version (Identify): 1.3 00:14:53.684 Maximum Queue Entries: 128 00:14:53.684 Contiguous Queues Required: Yes 00:14:53.684 Arbitration Mechanisms Supported 00:14:53.684 Weighted Round Robin: Not Supported 00:14:53.684 Vendor Specific: Not Supported 00:14:53.684 Reset Timeout: 15000 ms 00:14:53.684 Doorbell Stride: 4 bytes 00:14:53.684 NVM Subsystem Reset: Not Supported 00:14:53.684 Command Sets Supported 00:14:53.684 NVM Command Set: Supported 00:14:53.684 Boot Partition: Not Supported 00:14:53.684 Memory Page Size Minimum: 4096 bytes 00:14:53.684 Memory Page Size Maximum: 4096 bytes 00:14:53.684 Persistent Memory Region: Not Supported 00:14:53.684 Optional Asynchronous Events Supported 00:14:53.684 Namespace Attribute Notices: Not Supported 00:14:53.684 Firmware Activation Notices: Not Supported 00:14:53.684 ANA Change Notices: Not Supported 00:14:53.684 PLE Aggregate Log Change Notices: Not Supported 00:14:53.684 LBA Status Info Alert Notices: Not Supported 00:14:53.684 EGE Aggregate Log Change Notices: Not Supported 00:14:53.684 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.684 Zone Descriptor Change Notices: Not Supported 00:14:53.684 Discovery Log Change Notices: Supported 00:14:53.684 Controller Attributes 00:14:53.684 128-bit Host Identifier: Not Supported 00:14:53.684 Non-Operational Permissive Mode: Not Supported 00:14:53.684 NVM Sets: Not Supported 00:14:53.684 Read Recovery Levels: Not Supported 00:14:53.684 Endurance Groups: Not Supported 00:14:53.684 Predictable Latency Mode: Not Supported 00:14:53.684 Traffic Based Keep ALive: Not Supported 00:14:53.684 Namespace Granularity: Not Supported 00:14:53.684 SQ Associations: Not Supported 00:14:53.684 UUID List: Not Supported 00:14:53.684 Multi-Domain Subsystem: Not Supported 00:14:53.684 Fixed Capacity Management: Not Supported 00:14:53.684 Variable Capacity Management: Not Supported 00:14:53.684 Delete Endurance Group: Not Supported 00:14:53.684 Delete NVM Set: Not Supported 00:14:53.684 Extended LBA Formats Supported: Not Supported 00:14:53.684 Flexible Data Placement Supported: Not Supported 00:14:53.684 00:14:53.684 Controller Memory Buffer Support 00:14:53.684 ================================ 00:14:53.684 Supported: No 00:14:53.684 00:14:53.684 Persistent Memory Region Support 00:14:53.684 ================================ 00:14:53.684 Supported: No 00:14:53.684 00:14:53.684 Admin Command Set Attributes 00:14:53.684 ============================ 00:14:53.684 Security Send/Receive: Not Supported 00:14:53.684 Format NVM: Not Supported 00:14:53.684 Firmware Activate/Download: Not Supported 00:14:53.684 Namespace Management: Not Supported 00:14:53.684 Device Self-Test: Not Supported 00:14:53.684 Directives: Not Supported 00:14:53.684 NVMe-MI: Not Supported 00:14:53.684 Virtualization Management: Not Supported 00:14:53.684 Doorbell Buffer Config: Not Supported 00:14:53.684 Get LBA Status Capability: Not Supported 00:14:53.684 Command & Feature Lockdown Capability: Not Supported 00:14:53.684 Abort Command Limit: 1 00:14:53.684 Async 
Event Request Limit: 4 00:14:53.684 Number of Firmware Slots: N/A 00:14:53.684 Firmware Slot 1 Read-Only: N/A 00:14:53.684 Firmware Activation Without Reset: N/A 00:14:53.684 Multiple Update Detection Support: N/A 00:14:53.684 Firmware Update Granularity: No Information Provided 00:14:53.684 Per-Namespace SMART Log: No 00:14:53.684 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.684 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:53.684 Command Effects Log Page: Not Supported 00:14:53.684 Get Log Page Extended Data: Supported 00:14:53.684 Telemetry Log Pages: Not Supported 00:14:53.684 Persistent Event Log Pages: Not Supported 00:14:53.684 Supported Log Pages Log Page: May Support 00:14:53.684 Commands Supported & Effects Log Page: Not Supported 00:14:53.684 Feature Identifiers & Effects Log Page:May Support 00:14:53.684 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.684 Data Area 4 for Telemetry Log: Not Supported 00:14:53.684 Error Log Page Entries Supported: 128 00:14:53.684 Keep Alive: Not Supported 00:14:53.684 00:14:53.684 NVM Command Set Attributes 00:14:53.684 ========================== 00:14:53.684 Submission Queue Entry Size 00:14:53.684 Max: 1 00:14:53.684 Min: 1 00:14:53.684 Completion Queue Entry Size 00:14:53.684 Max: 1 00:14:53.684 Min: 1 00:14:53.684 Number of Namespaces: 0 00:14:53.684 Compare Command: Not Supported 00:14:53.684 Write Uncorrectable Command: Not Supported 00:14:53.684 Dataset Management Command: Not Supported 00:14:53.684 Write Zeroes Command: Not Supported 00:14:53.684 Set Features Save Field: Not Supported 00:14:53.684 Reservations: Not Supported 00:14:53.684 Timestamp: Not Supported 00:14:53.684 Copy: Not Supported 00:14:53.684 Volatile Write Cache: Not Present 00:14:53.684 Atomic Write Unit (Normal): 1 00:14:53.684 Atomic Write Unit (PFail): 1 00:14:53.684 Atomic Compare & Write Unit: 1 00:14:53.684 Fused Compare & Write: Supported 00:14:53.684 Scatter-Gather List 00:14:53.684 SGL Command Set: Supported 00:14:53.684 SGL Keyed: Supported 00:14:53.684 SGL Bit Bucket Descriptor: Not Supported 00:14:53.684 SGL Metadata Pointer: Not Supported 00:14:53.684 Oversized SGL: Not Supported 00:14:53.684 SGL Metadata Address: Not Supported 00:14:53.684 SGL Offset: Supported 00:14:53.684 Transport SGL Data Block: Not Supported 00:14:53.684 Replay Protected Memory Block: Not Supported 00:14:53.684 00:14:53.684 Firmware Slot Information 00:14:53.684 ========================= 00:14:53.684 Active slot: 0 00:14:53.684 00:14:53.684 00:14:53.684 Error Log 00:14:53.684 ========= 00:14:53.684 00:14:53.684 Active Namespaces 00:14:53.684 ================= 00:14:53.684 Discovery Log Page 00:14:53.684 ================== 00:14:53.684 Generation Counter: 2 00:14:53.684 Number of Records: 2 00:14:53.684 Record Format: 0 00:14:53.684 00:14:53.684 Discovery Log Entry 0 00:14:53.684 ---------------------- 00:14:53.684 Transport Type: 3 (TCP) 00:14:53.684 Address Family: 1 (IPv4) 00:14:53.684 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:53.684 Entry Flags: 00:14:53.684 Duplicate Returned Information: 1 00:14:53.684 Explicit Persistent Connection Support for Discovery: 1 00:14:53.684 Transport Requirements: 00:14:53.684 Secure Channel: Not Required 00:14:53.684 Port ID: 0 (0x0000) 00:14:53.684 Controller ID: 65535 (0xffff) 00:14:53.684 Admin Max SQ Size: 128 00:14:53.684 Transport Service Identifier: 4420 00:14:53.684 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:53.684 Transport Address: 10.0.0.3 00:14:53.684 
Discovery Log Entry 1 00:14:53.684 ---------------------- 00:14:53.684 Transport Type: 3 (TCP) 00:14:53.684 Address Family: 1 (IPv4) 00:14:53.684 Subsystem Type: 2 (NVM Subsystem) 00:14:53.684 Entry Flags: 00:14:53.684 Duplicate Returned Information: 0 00:14:53.684 Explicit Persistent Connection Support for Discovery: 0 00:14:53.684 Transport Requirements: 00:14:53.684 Secure Channel: Not Required 00:14:53.684 Port ID: 0 (0x0000) 00:14:53.684 Controller ID: 65535 (0xffff) 00:14:53.684 Admin Max SQ Size: 128 00:14:53.684 Transport Service Identifier: 4420 00:14:53.684 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:53.684 Transport Address: 10.0.0.3 [2024-11-26 20:35:54.022773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.684 [2024-11-26 20:35:54.022780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.684 [2024-11-26 20:35:54.022784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.684 [2024-11-26 20:35:54.022788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8d40) on tqpair=0x1554750 00:14:53.684 [2024-11-26 20:35:54.022878] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:53.684 [2024-11-26 20:35:54.022891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8740) on tqpair=0x1554750 00:14:53.684 [2024-11-26 20:35:54.022898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.684 [2024-11-26 20:35:54.022904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b88c0) on tqpair=0x1554750 00:14:53.684 [2024-11-26 20:35:54.022909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.684 [2024-11-26 20:35:54.022914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8a40) on tqpair=0x1554750 00:14:53.684 [2024-11-26 20:35:54.022919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.685 [2024-11-26 20:35:54.022925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.022929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.685 [2024-11-26 20:35:54.022942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.022947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.022952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.022960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.022982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023048] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023190] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:53.685 [2024-11-26 20:35:54.023195] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:53.685 [2024-11-26 20:35:54.023206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 
20:35:54.023427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on 
tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.685 [2024-11-26 20:35:54.023898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.685 [2024-11-26 20:35:54.023915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.685 [2024-11-26 20:35:54.023966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.685 [2024-11-26 20:35:54.023973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.685 [2024-11-26 20:35:54.023976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.685 [2024-11-26 20:35:54.023991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.023996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.685 [2024-11-26 20:35:54.024000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.686 [2024-11-26 20:35:54.024007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.686 [2024-11-26 20:35:54.024024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.686 [2024-11-26 20:35:54.024071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.686 [2024-11-26 20:35:54.024078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.686 [2024-11-26 20:35:54.024082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.024086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.686 [2024-11-26 20:35:54.024097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.024101] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.024105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.686 [2024-11-26 20:35:54.024113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.686 [2024-11-26 20:35:54.024129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.686 [2024-11-26 20:35:54.024179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.686 [2024-11-26 20:35:54.024186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.686 [2024-11-26 20:35:54.024190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.024194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.686 [2024-11-26 20:35:54.024205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.024210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.024213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1554750) 00:14:53.686 [2024-11-26 20:35:54.028233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.686 [2024-11-26 20:35:54.028275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b8bc0, cid 3, qid 0 00:14:53.686 [2024-11-26 20:35:54.028326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.686 [2024-11-26 20:35:54.028333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.686 [2024-11-26 20:35:54.028337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.686 [2024-11-26 20:35:54.028342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b8bc0) on tqpair=0x1554750 00:14:53.686 [2024-11-26 20:35:54.028351] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:14:53.968 00:14:53.968 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:53.968 [2024-11-26 20:35:54.074397] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:14:53.968 [2024-11-26 20:35:54.074593] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74324 ] 00:14:53.968 [2024-11-26 20:35:54.236946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:53.968 [2024-11-26 20:35:54.237016] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:53.968 [2024-11-26 20:35:54.237024] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:53.968 [2024-11-26 20:35:54.237040] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:53.968 [2024-11-26 20:35:54.237055] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:53.968 [2024-11-26 20:35:54.237548] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:53.968 [2024-11-26 20:35:54.237611] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1df4750 0 00:14:53.968 [2024-11-26 20:35:54.248242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:53.968 [2024-11-26 20:35:54.248267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:53.968 [2024-11-26 20:35:54.248273] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:53.968 [2024-11-26 20:35:54.248277] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:53.968 [2024-11-26 20:35:54.248314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.968 [2024-11-26 20:35:54.248322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.968 [2024-11-26 20:35:54.248326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.968 [2024-11-26 20:35:54.248341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:53.968 [2024-11-26 20:35:54.248374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.968 [2024-11-26 20:35:54.256239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.968 [2024-11-26 20:35:54.256262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.968 [2024-11-26 20:35:54.256267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.968 [2024-11-26 20:35:54.256272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.968 [2024-11-26 20:35:54.256288] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:53.968 [2024-11-26 20:35:54.256298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:53.968 [2024-11-26 20:35:54.256305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:53.968 [2024-11-26 20:35:54.256325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.968 [2024-11-26 20:35:54.256331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.968 [2024-11-26 20:35:54.256335] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.968 [2024-11-26 20:35:54.256345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.968 [2024-11-26 20:35:54.256373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.256431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.256439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.256443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.256454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:53.969 [2024-11-26 20:35:54.256462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:53.969 [2024-11-26 20:35:54.256471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.256487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.969 [2024-11-26 20:35:54.256506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.256555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.256562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.256566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.256577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:53.969 [2024-11-26 20:35:54.256585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.969 [2024-11-26 20:35:54.256593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.256609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.969 [2024-11-26 20:35:54.256627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.256675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.256682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 
[2024-11-26 20:35:54.256686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.256696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.969 [2024-11-26 20:35:54.256707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.256724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.969 [2024-11-26 20:35:54.256741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.256786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.256793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.256796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.256806] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:53.969 [2024-11-26 20:35:54.256811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:53.969 [2024-11-26 20:35:54.256820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.969 [2024-11-26 20:35:54.256931] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:53.969 [2024-11-26 20:35:54.256947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.969 [2024-11-26 20:35:54.256957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.256966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.256973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.969 [2024-11-26 20:35:54.256995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.257047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.257054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.257058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 
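(Editor's aside, not part of the captured output.) The DEBUG trace above is the driver-internal view of bringing up the admin queue over TCP: the icreq/icresp exchange, FABRIC CONNECT, property reads of VS/CAP/CC, then writing CC.EN = 1 and waiting for CSTS.RDY = 1. From an application's point of view that whole handshake is driven by a single connect call. A minimal illustrative sketch of connecting to the same target through SPDK's public API (the app name and the omitted error handling are assumptions, not taken from this test):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same target the trace above connects to. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));

        /* spdk_nvme_connect() runs the init state machine traced here:
         * icreq, FABRIC CONNECT, CAP/CC/CSTS property get/set, IDENTIFY,
         * AER configuration, keep-alive and queue negotiation. */
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.40s FW: %.8s\n", cdata->mn, cdata->fr);

        spdk_nvme_detach(ctrlr);   /* triggers the shutdown sequence traced later */
        return 0;
    }
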
00:14:53.969 [2024-11-26 20:35:54.257068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.969 [2024-11-26 20:35:54.257079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.257096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.969 [2024-11-26 20:35:54.257114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.257165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.257172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.257176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.257185] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.969 [2024-11-26 20:35:54.257190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:53.969 [2024-11-26 20:35:54.257199] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:53.969 [2024-11-26 20:35:54.257210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.969 [2024-11-26 20:35:54.257234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.257249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.969 [2024-11-26 20:35:54.257270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.257391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.969 [2024-11-26 20:35:54.257403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.969 [2024-11-26 20:35:54.257407] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257411] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=4096, cccid=0 00:14:53.969 [2024-11-26 20:35:54.257417] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e58740) on tqpair(0x1df4750): expected_datao=0, payload_size=4096 00:14:53.969 [2024-11-26 20:35:54.257422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257431] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257436] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.257452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.257456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.257470] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:53.969 [2024-11-26 20:35:54.257476] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:53.969 [2024-11-26 20:35:54.257481] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:53.969 [2024-11-26 20:35:54.257491] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:53.969 [2024-11-26 20:35:54.257496] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:53.969 [2024-11-26 20:35:54.257502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:53.969 [2024-11-26 20:35:54.257513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.969 [2024-11-26 20:35:54.257521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.969 [2024-11-26 20:35:54.257538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.969 [2024-11-26 20:35:54.257559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.969 [2024-11-26 20:35:54.257614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.969 [2024-11-26 20:35:54.257621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.969 [2024-11-26 20:35:54.257625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.969 [2024-11-26 20:35:54.257638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.969 [2024-11-26 20:35:54.257646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.257653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.970 [2024-11-26 20:35:54.257660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 
20:35:54.257668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.257674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.970 [2024-11-26 20:35:54.257681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.257695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.970 [2024-11-26 20:35:54.257701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.257715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.970 [2024-11-26 20:35:54.257721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.257730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.257737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.257749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.970 [2024-11-26 20:35:54.257774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58740, cid 0, qid 0 00:14:53.970 [2024-11-26 20:35:54.257782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e588c0, cid 1, qid 0 00:14:53.970 [2024-11-26 20:35:54.257787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58a40, cid 2, qid 0 00:14:53.970 [2024-11-26 20:35:54.257792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.970 [2024-11-26 20:35:54.257797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.970 [2024-11-26 20:35:54.257886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.970 [2024-11-26 20:35:54.257893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.970 [2024-11-26 20:35:54.257897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.970 [2024-11-26 20:35:54.257907] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:53.970 [2024-11-26 20:35:54.257913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.257922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.257929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.257936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.257945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.257952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.970 [2024-11-26 20:35:54.257970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.970 [2024-11-26 20:35:54.258025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.970 [2024-11-26 20:35:54.258032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.970 [2024-11-26 20:35:54.258035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.970 [2024-11-26 20:35:54.258107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.258140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.970 [2024-11-26 20:35:54.258160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.970 [2024-11-26 20:35:54.258241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.970 [2024-11-26 20:35:54.258250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.970 [2024-11-26 20:35:54.258254] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258258] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=4096, cccid=4 00:14:53.970 [2024-11-26 20:35:54.258263] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e58d40) on tqpair(0x1df4750): expected_datao=0, payload_size=4096 00:14:53.970 [2024-11-26 20:35:54.258268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258276] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258280] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 
20:35:54.258289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.970 [2024-11-26 20:35:54.258295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.970 [2024-11-26 20:35:54.258299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.970 [2024-11-26 20:35:54.258315] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:53.970 [2024-11-26 20:35:54.258329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.258361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.970 [2024-11-26 20:35:54.258383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.970 [2024-11-26 20:35:54.258466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.970 [2024-11-26 20:35:54.258473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.970 [2024-11-26 20:35:54.258476] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258480] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=4096, cccid=4 00:14:53.970 [2024-11-26 20:35:54.258485] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e58d40) on tqpair(0x1df4750): expected_datao=0, payload_size=4096 00:14:53.970 [2024-11-26 20:35:54.258490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258502] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.970 [2024-11-26 20:35:54.258517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.970 [2024-11-26 20:35:54.258521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.970 [2024-11-26 20:35:54.258545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.970 [2024-11-26 20:35:54.258580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.970 [2024-11-26 20:35:54.258600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.970 [2024-11-26 20:35:54.258664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.970 [2024-11-26 20:35:54.258672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.970 [2024-11-26 20:35:54.258675] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258679] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=4096, cccid=4 00:14:53.970 [2024-11-26 20:35:54.258685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e58d40) on tqpair(0x1df4750): expected_datao=0, payload_size=4096 00:14:53.970 [2024-11-26 20:35:54.258689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258697] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258701] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.970 [2024-11-26 20:35:54.258716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.970 [2024-11-26 20:35:54.258720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.970 [2024-11-26 20:35:54.258724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.970 [2024-11-26 20:35:54.258734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:53.970 [2024-11-26 20:35:54.258762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:53.971 [2024-11-26 20:35:54.258767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.971 [2024-11-26 20:35:54.258773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:53.971 [2024-11-26 20:35:54.258779] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.971 [2024-11-26 20:35:54.258784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:53.971 [2024-11-26 20:35:54.258790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:53.971 [2024-11-26 20:35:54.258807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 
[2024-11-26 20:35:54.258812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.258819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.258826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.258831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.258835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.258841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.971 [2024-11-26 20:35:54.258867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.971 [2024-11-26 20:35:54.258875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58ec0, cid 5, qid 0 00:14:53.971 [2024-11-26 20:35:54.258938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.258945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.258948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.258953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.971 [2024-11-26 20:35:54.258960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.258966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.258970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.258974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58ec0) on tqpair=0x1df4750 00:14:53.971 [2024-11-26 20:35:54.258985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.258990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.258997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58ec0, cid 5, qid 0 00:14:53.971 [2024-11-26 20:35:54.259060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.259067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.259071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58ec0) on tqpair=0x1df4750 00:14:53.971 [2024-11-26 20:35:54.259086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.259098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259114] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58ec0, cid 5, qid 0 00:14:53.971 [2024-11-26 20:35:54.259169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.259189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.259194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58ec0) on tqpair=0x1df4750 00:14:53.971 [2024-11-26 20:35:54.259210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.259236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58ec0, cid 5, qid 0 00:14:53.971 [2024-11-26 20:35:54.259306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.259317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.259322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58ec0) on tqpair=0x1df4750 00:14:53.971 [2024-11-26 20:35:54.259346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.259360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.259379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.259398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df4750) 00:14:53.971 [2024-11-26 20:35:54.259418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.971 [2024-11-26 20:35:54.259439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58ec0, cid 5, qid 0 00:14:53.971 
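(Editor's aside, not part of the captured output.) Just before the controller report is printed, the trace above shows the host issuing GET FEATURES plus a batch of GET LOG PAGE admin commands (error log 01h, SMART/health 02h, firmware slot 03h, commands supported and effects 05h); their payloads feed the "Health Information", "Error Log", "Firmware Slot Information", and "Commands Supported and Effects" sections of the report below. A hedged sketch of fetching the same SMART/health page through SPDK's public admin API (the function names and error handling here are assumptions; only the command itself mirrors the trace):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    health_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        *(bool *)cb_arg = true;
    }

    /* "ctrlr" is assumed to come from spdk_nvme_connect(), as in the earlier sketch. */
    static void
    dump_health(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_health_information_page *health;
        bool done = false;

        health = spdk_dma_zmalloc(sizeof(*health), 0x1000, NULL);
        if (health == NULL) {
            return;
        }

        /* Same admin command the trace prints as
         * "GET LOG PAGE (02) ... nsid:ffffffff": SMART / Health Information. */
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             0xFFFFFFFFu, health, sizeof(*health), 0,
                                             health_log_done, &done) == 0) {
            while (!done) {
                /* Admin completions are polled, not interrupt driven. */
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("percentage used: %u%%, available spare: %u%%\n",
                   health->percentage_used, health->available_spare);
        }

        spdk_dma_free(health);
    }
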
[2024-11-26 20:35:54.259446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58d40, cid 4, qid 0 00:14:53.971 [2024-11-26 20:35:54.259452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e59040, cid 6, qid 0 00:14:53.971 [2024-11-26 20:35:54.259457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e591c0, cid 7, qid 0 00:14:53.971 [2024-11-26 20:35:54.259604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.971 [2024-11-26 20:35:54.259616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.971 [2024-11-26 20:35:54.259620] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=8192, cccid=5 00:14:53.971 [2024-11-26 20:35:54.259630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e58ec0) on tqpair(0x1df4750): expected_datao=0, payload_size=8192 00:14:53.971 [2024-11-26 20:35:54.259635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259660] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.971 [2024-11-26 20:35:54.259682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.971 [2024-11-26 20:35:54.259686] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259690] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=512, cccid=4 00:14:53.971 [2024-11-26 20:35:54.259695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e58d40) on tqpair(0x1df4750): expected_datao=0, payload_size=512 00:14:53.971 [2024-11-26 20:35:54.259700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259707] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259711] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.971 [2024-11-26 20:35:54.259723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.971 [2024-11-26 20:35:54.259726] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259730] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=512, cccid=6 00:14:53.971 [2024-11-26 20:35:54.259735] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e59040) on tqpair(0x1df4750): expected_datao=0, payload_size=512 00:14:53.971 [2024-11-26 20:35:54.259740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259747] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.971 [2024-11-26 20:35:54.259763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.971 [2024-11-26 20:35:54.259767] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259770] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df4750): datao=0, datal=4096, cccid=7 00:14:53.971 [2024-11-26 20:35:54.259775] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e591c0) on tqpair(0x1df4750): expected_datao=0, payload_size=4096 00:14:53.971 [2024-11-26 20:35:54.259780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259787] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.259805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.259809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58ec0) on tqpair=0x1df4750 00:14:53.971 [2024-11-26 20:35:54.259831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.971 [2024-11-26 20:35:54.259838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.971 [2024-11-26 20:35:54.259841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.971 [2024-11-26 20:35:54.259846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58d40) on tqpair=0x1df4750 00:14:53.972 [2024-11-26 20:35:54.259859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.972 [2024-11-26 20:35:54.259866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.972 [2024-11-26 20:35:54.259870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.972 [2024-11-26 20:35:54.259874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e59040) on tqpair=0x1df4750 00:14:53.972 [2024-11-26 20:35:54.259882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.972 ===================================================== 00:14:53.972 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.972 ===================================================== 00:14:53.972 Controller Capabilities/Features 00:14:53.972 ================================ 00:14:53.972 Vendor ID: 8086 00:14:53.972 Subsystem Vendor ID: 8086 00:14:53.972 Serial Number: SPDK00000000000001 00:14:53.972 Model Number: SPDK bdev Controller 00:14:53.972 Firmware Version: 25.01 00:14:53.972 Recommended Arb Burst: 6 00:14:53.972 IEEE OUI Identifier: e4 d2 5c 00:14:53.972 Multi-path I/O 00:14:53.972 May have multiple subsystem ports: Yes 00:14:53.972 May have multiple controllers: Yes 00:14:53.972 Associated with SR-IOV VF: No 00:14:53.972 Max Data Transfer Size: 131072 00:14:53.972 Max Number of Namespaces: 32 00:14:53.972 Max Number of I/O Queues: 127 00:14:53.972 NVMe Specification Version (VS): 1.3 00:14:53.972 NVMe Specification Version (Identify): 1.3 00:14:53.972 Maximum Queue Entries: 128 00:14:53.972 Contiguous Queues Required: Yes 00:14:53.972 Arbitration Mechanisms Supported 00:14:53.972 Weighted Round Robin: Not Supported 00:14:53.972 Vendor Specific: Not Supported 00:14:53.972 Reset Timeout: 15000 ms 00:14:53.972 Doorbell Stride: 4 bytes 00:14:53.972 NVM Subsystem Reset: Not Supported 00:14:53.972 
Command Sets Supported 00:14:53.972 NVM Command Set: Supported 00:14:53.972 Boot Partition: Not Supported 00:14:53.972 Memory Page Size Minimum: 4096 bytes 00:14:53.972 Memory Page Size Maximum: 4096 bytes 00:14:53.972 Persistent Memory Region: Not Supported 00:14:53.972 Optional Asynchronous Events Supported 00:14:53.972 Namespace Attribute Notices: Supported 00:14:53.972 Firmware Activation Notices: Not Supported 00:14:53.972 ANA Change Notices: Not Supported 00:14:53.972 PLE Aggregate Log Change Notices: Not Supported 00:14:53.972 LBA Status Info Alert Notices: Not Supported 00:14:53.972 EGE Aggregate Log Change Notices: Not Supported 00:14:53.972 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.972 Zone Descriptor Change Notices: Not Supported 00:14:53.972 Discovery Log Change Notices: Not Supported 00:14:53.972 Controller Attributes 00:14:53.972 128-bit Host Identifier: Supported 00:14:53.972 Non-Operational Permissive Mode: Not Supported 00:14:53.972 NVM Sets: Not Supported 00:14:53.972 Read Recovery Levels: Not Supported 00:14:53.972 Endurance Groups: Not Supported 00:14:53.972 Predictable Latency Mode: Not Supported 00:14:53.972 Traffic Based Keep ALive: Not Supported 00:14:53.972 Namespace Granularity: Not Supported 00:14:53.972 SQ Associations: Not Supported 00:14:53.972 UUID List: Not Supported 00:14:53.972 Multi-Domain Subsystem: Not Supported 00:14:53.972 Fixed Capacity Management: Not Supported 00:14:53.972 Variable Capacity Management: Not Supported 00:14:53.972 Delete Endurance Group: Not Supported 00:14:53.972 Delete NVM Set: Not Supported 00:14:53.972 Extended LBA Formats Supported: Not Supported 00:14:53.972 Flexible Data Placement Supported: Not Supported 00:14:53.972 00:14:53.972 Controller Memory Buffer Support 00:14:53.972 ================================ 00:14:53.972 Supported: No 00:14:53.972 00:14:53.972 Persistent Memory Region Support 00:14:53.972 ================================ 00:14:53.972 Supported: No 00:14:53.972 00:14:53.972 Admin Command Set Attributes 00:14:53.972 ============================ 00:14:53.972 Security Send/Receive: Not Supported 00:14:53.972 Format NVM: Not Supported 00:14:53.972 Firmware Activate/Download: Not Supported 00:14:53.972 Namespace Management: Not Supported 00:14:53.972 Device Self-Test: Not Supported 00:14:53.972 Directives: Not Supported 00:14:53.972 NVMe-MI: Not Supported 00:14:53.972 Virtualization Management: Not Supported 00:14:53.972 Doorbell Buffer Config: Not Supported 00:14:53.972 Get LBA Status Capability: Not Supported 00:14:53.972 Command & Feature Lockdown Capability: Not Supported 00:14:53.972 Abort Command Limit: 4 00:14:53.972 Async Event Request Limit: 4 00:14:53.972 Number of Firmware Slots: N/A 00:14:53.972 Firmware Slot 1 Read-Only: N/A 00:14:53.972 Firmware Activation Without Reset: [2024-11-26 20:35:54.259888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.972 [2024-11-26 20:35:54.259892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.972 [2024-11-26 20:35:54.259896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e591c0) on tqpair=0x1df4750 00:14:53.972 N/A 00:14:53.972 Multiple Update Detection Support: N/A 00:14:53.972 Firmware Update Granularity: No Information Provided 00:14:53.972 Per-Namespace SMART Log: No 00:14:53.972 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.972 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:53.972 Command Effects Log Page: Supported 00:14:53.972 Get Log Page Extended 
Data: Supported 00:14:53.972 Telemetry Log Pages: Not Supported 00:14:53.972 Persistent Event Log Pages: Not Supported 00:14:53.972 Supported Log Pages Log Page: May Support 00:14:53.972 Commands Supported & Effects Log Page: Not Supported 00:14:53.972 Feature Identifiers & Effects Log Page:May Support 00:14:53.972 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.972 Data Area 4 for Telemetry Log: Not Supported 00:14:53.972 Error Log Page Entries Supported: 128 00:14:53.972 Keep Alive: Supported 00:14:53.972 Keep Alive Granularity: 10000 ms 00:14:53.972 00:14:53.972 NVM Command Set Attributes 00:14:53.972 ========================== 00:14:53.972 Submission Queue Entry Size 00:14:53.972 Max: 64 00:14:53.972 Min: 64 00:14:53.972 Completion Queue Entry Size 00:14:53.972 Max: 16 00:14:53.972 Min: 16 00:14:53.972 Number of Namespaces: 32 00:14:53.972 Compare Command: Supported 00:14:53.972 Write Uncorrectable Command: Not Supported 00:14:53.972 Dataset Management Command: Supported 00:14:53.972 Write Zeroes Command: Supported 00:14:53.972 Set Features Save Field: Not Supported 00:14:53.972 Reservations: Supported 00:14:53.972 Timestamp: Not Supported 00:14:53.972 Copy: Supported 00:14:53.972 Volatile Write Cache: Present 00:14:53.972 Atomic Write Unit (Normal): 1 00:14:53.972 Atomic Write Unit (PFail): 1 00:14:53.972 Atomic Compare & Write Unit: 1 00:14:53.972 Fused Compare & Write: Supported 00:14:53.972 Scatter-Gather List 00:14:53.972 SGL Command Set: Supported 00:14:53.972 SGL Keyed: Supported 00:14:53.972 SGL Bit Bucket Descriptor: Not Supported 00:14:53.972 SGL Metadata Pointer: Not Supported 00:14:53.972 Oversized SGL: Not Supported 00:14:53.972 SGL Metadata Address: Not Supported 00:14:53.972 SGL Offset: Supported 00:14:53.972 Transport SGL Data Block: Not Supported 00:14:53.972 Replay Protected Memory Block: Not Supported 00:14:53.972 00:14:53.972 Firmware Slot Information 00:14:53.972 ========================= 00:14:53.972 Active slot: 1 00:14:53.972 Slot 1 Firmware Revision: 25.01 00:14:53.972 00:14:53.972 00:14:53.972 Commands Supported and Effects 00:14:53.972 ============================== 00:14:53.972 Admin Commands 00:14:53.972 -------------- 00:14:53.972 Get Log Page (02h): Supported 00:14:53.972 Identify (06h): Supported 00:14:53.972 Abort (08h): Supported 00:14:53.972 Set Features (09h): Supported 00:14:53.972 Get Features (0Ah): Supported 00:14:53.972 Asynchronous Event Request (0Ch): Supported 00:14:53.972 Keep Alive (18h): Supported 00:14:53.972 I/O Commands 00:14:53.972 ------------ 00:14:53.972 Flush (00h): Supported LBA-Change 00:14:53.972 Write (01h): Supported LBA-Change 00:14:53.972 Read (02h): Supported 00:14:53.972 Compare (05h): Supported 00:14:53.972 Write Zeroes (08h): Supported LBA-Change 00:14:53.972 Dataset Management (09h): Supported LBA-Change 00:14:53.972 Copy (19h): Supported LBA-Change 00:14:53.972 00:14:53.972 Error Log 00:14:53.972 ========= 00:14:53.972 00:14:53.972 Arbitration 00:14:53.972 =========== 00:14:53.972 Arbitration Burst: 1 00:14:53.972 00:14:53.972 Power Management 00:14:53.972 ================ 00:14:53.972 Number of Power States: 1 00:14:53.972 Current Power State: Power State #0 00:14:53.972 Power State #0: 00:14:53.972 Max Power: 0.00 W 00:14:53.972 Non-Operational State: Operational 00:14:53.972 Entry Latency: Not Reported 00:14:53.972 Exit Latency: Not Reported 00:14:53.972 Relative Read Throughput: 0 00:14:53.972 Relative Read Latency: 0 00:14:53.972 Relative Write Throughput: 0 00:14:53.973 Relative Write Latency: 0 
00:14:53.973 Idle Power: Not Reported 00:14:53.973 Active Power: Not Reported 00:14:53.973 Non-Operational Permissive Mode: Not Supported 00:14:53.973 00:14:53.973 Health Information 00:14:53.973 ================== 00:14:53.973 Critical Warnings: 00:14:53.973 Available Spare Space: OK 00:14:53.973 Temperature: OK 00:14:53.973 Device Reliability: OK 00:14:53.973 Read Only: No 00:14:53.973 Volatile Memory Backup: OK 00:14:53.973 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:53.973 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:53.973 Available Spare: 0% 00:14:53.973 Available Spare Threshold: 0% 00:14:53.973 Life Percentage Used:[2024-11-26 20:35:54.260003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.260013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.260021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.260045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e591c0, cid 7, qid 0 00:14:53.973 [2024-11-26 20:35:54.260097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.260104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.260108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.260112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e591c0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.260152] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:53.973 [2024-11-26 20:35:54.260164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58740) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.260171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.973 [2024-11-26 20:35:54.260177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e588c0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.260182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.973 [2024-11-26 20:35:54.260188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58a40) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.260192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.973 [2024-11-26 20:35:54.260198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.260203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.973 [2024-11-26 20:35:54.260212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.260217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.264254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.264284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.973 [2024-11-26 20:35:54.264347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.264355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.264359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.264373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.264389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.264412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.973 [2024-11-26 20:35:54.264481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.264488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.264492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.264511] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:53.973 [2024-11-26 20:35:54.264517] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:53.973 [2024-11-26 20:35:54.264528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.264544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.264561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.973 [2024-11-26 20:35:54.264645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.264652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.264656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.264672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.264689] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.264706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.973 [2024-11-26 20:35:54.264757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.264764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.264768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.264783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.264799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.264816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.973 [2024-11-26 20:35:54.264864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.264870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.264874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.264889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.264905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.973 [2024-11-26 20:35:54.264922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.973 [2024-11-26 20:35:54.264975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.973 [2024-11-26 20:35:54.264982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.973 [2024-11-26 20:35:54.264986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.264990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.973 [2024-11-26 20:35:54.265001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.265006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.973 [2024-11-26 20:35:54.265010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.973 [2024-11-26 20:35:54.265017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265035] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 
20:35:54.265460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 
[2024-11-26 20:35:54.265788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.265892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.265907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.265916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.265923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.265940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.265987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.265998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.266003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.266018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.266035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.266053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.266100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.266112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.266116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.266131] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.266148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.266166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.266214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.266232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.266237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.266254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.266271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.974 [2024-11-26 20:35:54.266291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.974 [2024-11-26 20:35:54.266338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.974 [2024-11-26 20:35:54.266345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.974 [2024-11-26 20:35:54.266349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.974 [2024-11-26 20:35:54.266364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.974 [2024-11-26 20:35:54.266373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.974 [2024-11-26 20:35:54.266380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.266397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.266447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.266455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.266458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.266473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266482] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.266490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.266506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.266553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.266560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.266564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.266579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.266596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.266613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.266660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.266667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.266671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.266686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.266703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.266719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.266765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.266771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.266775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.266790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.266806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.266823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.266871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.266882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.266886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.266902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.266918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.266936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.266983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.266991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.266994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.266999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.267026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.267043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.267093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.267100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.267104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.267135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.267152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 
20:35:54.267196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.267207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.267212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.267256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.267276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.267326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.267334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.267337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.267369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.267386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.267436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.267443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.267447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.267478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.267494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.267542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.267549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 
[2024-11-26 20:35:54.267553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.975 [2024-11-26 20:35:54.267584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.975 [2024-11-26 20:35:54.267601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.975 [2024-11-26 20:35:54.267651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.975 [2024-11-26 20:35:54.267658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.975 [2024-11-26 20:35:54.267661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.975 [2024-11-26 20:35:54.267687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.975 [2024-11-26 20:35:54.267692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.976 [2024-11-26 20:35:54.267704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.976 [2024-11-26 20:35:54.267722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.976 [2024-11-26 20:35:54.267776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.976 [2024-11-26 20:35:54.267787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.976 [2024-11-26 20:35:54.267791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.976 [2024-11-26 20:35:54.267807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.976 [2024-11-26 20:35:54.267824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.976 [2024-11-26 20:35:54.267842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.976 [2024-11-26 20:35:54.267887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.976 [2024-11-26 20:35:54.267898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.976 [2024-11-26 20:35:54.267902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.976 [2024-11-26 20:35:54.267918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.267927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.976 [2024-11-26 20:35:54.267935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.976 [2024-11-26 20:35:54.267952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.976 [2024-11-26 20:35:54.267996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.976 [2024-11-26 20:35:54.268003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.976 [2024-11-26 20:35:54.268007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.976 [2024-11-26 20:35:54.268022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.976 [2024-11-26 20:35:54.268038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.976 [2024-11-26 20:35:54.268055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.976 [2024-11-26 20:35:54.268099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.976 [2024-11-26 20:35:54.268106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.976 [2024-11-26 20:35:54.268110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.976 [2024-11-26 20:35:54.268125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750) 00:14:53.976 [2024-11-26 20:35:54.268142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.976 [2024-11-26 20:35:54.268159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0 00:14:53.976 [2024-11-26 20:35:54.268203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.976 [2024-11-26 20:35:54.268209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.976 [2024-11-26 20:35:54.268213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.268217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750 00:14:53.976 [2024-11-26 20:35:54.272256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.976 [2024-11-26 20:35:54.272265] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:53.976 [2024-11-26 20:35:54.272269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df4750)
00:14:53.976 [2024-11-26 20:35:54.272278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:53.976 [2024-11-26 20:35:54.272303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e58bc0, cid 3, qid 0
00:14:53.976 [2024-11-26 20:35:54.272361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:53.976 [2024-11-26 20:35:54.272369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:53.976 [2024-11-26 20:35:54.272373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:53.976 [2024-11-26 20:35:54.272377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e58bc0) on tqpair=0x1df4750
00:14:53.976 [2024-11-26 20:35:54.272386] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:14:53.976 0%
00:14:53.976 Data Units Read: 0
00:14:53.976 Data Units Written: 0
00:14:53.976 Host Read Commands: 0
00:14:53.976 Host Write Commands: 0
00:14:53.976 Controller Busy Time: 0 minutes
00:14:53.976 Power Cycles: 0
00:14:53.976 Power On Hours: 0 hours
00:14:53.976 Unsafe Shutdowns: 0
00:14:53.976 Unrecoverable Media Errors: 0
00:14:53.976 Lifetime Error Log Entries: 0
00:14:53.976 Warning Temperature Time: 0 minutes
00:14:53.976 Critical Temperature Time: 0 minutes
00:14:53.976 
00:14:53.976 Number of Queues
00:14:53.976 ================
00:14:53.976 Number of I/O Submission Queues: 127
00:14:53.976 Number of I/O Completion Queues: 127
00:14:53.976 
00:14:53.976 Active Namespaces
00:14:53.976 =================
00:14:53.976 Namespace ID:1
00:14:53.976 Error Recovery Timeout: Unlimited
00:14:53.976 Command Set Identifier: NVM (00h)
00:14:53.976 Deallocate: Supported
00:14:53.976 Deallocated/Unwritten Error: Not Supported
00:14:53.976 Deallocated Read Value: Unknown
00:14:53.976 Deallocate in Write Zeroes: Not Supported
00:14:53.976 Deallocated Guard Field: 0xFFFF
00:14:53.976 Flush: Supported
00:14:53.976 Reservation: Supported
00:14:53.976 Namespace Sharing Capabilities: Multiple Controllers
00:14:53.976 Size (in LBAs): 131072 (0GiB)
00:14:53.976 Capacity (in LBAs): 131072 (0GiB)
00:14:53.976 Utilization (in LBAs): 131072 (0GiB)
00:14:53.976 NGUID: ABCDEF0123456789ABCDEF0123456789
00:14:53.976 EUI64: ABCDEF0123456789
00:14:53.976 UUID: 8962f13d-374d-417d-8505-bd2efd325071
00:14:53.976 Thin Provisioning: Not Supported
00:14:53.976 Per-NS Atomic Units: Yes
00:14:53.976 Atomic Boundary Size (Normal): 0
00:14:53.976 Atomic Boundary Size (PFail): 0
00:14:53.976 Atomic Boundary Offset: 0
00:14:53.976 Maximum Single Source Range Length: 65535
00:14:53.976 Maximum Copy Length: 65535
00:14:53.976 Maximum Source Range Count: 1
00:14:53.976 NGUID/EUI64 Never Reused: No
00:14:53.976 Namespace Write Protected: No
00:14:53.976 Number of LBA Formats: 1
00:14:53.976 Current LBA Format: LBA Format #00
00:14:53.976 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:53.976 
00:14:53.976 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:14:54.235 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:54.235 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:54.235 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:54.235 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.235 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:54.235 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.236 rmmod nvme_tcp 00:14:54.236 rmmod nvme_fabrics 00:14:54.236 rmmod nvme_keyring 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74293 ']' 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74293 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74293 ']' 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74293 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74293 00:14:54.236 killing process with pid 74293 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74293' 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74293 00:14:54.236 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74293 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:54.496 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:54.756 00:14:54.756 real 0m2.338s 00:14:54.756 user 0m4.733s 00:14:54.756 sys 0m0.747s 00:14:54.756 ************************************ 00:14:54.756 END TEST nvmf_identify 00:14:54.756 ************************************ 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.756 ************************************ 00:14:54.756 START TEST nvmf_perf 00:14:54.756 ************************************ 00:14:54.756 20:35:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:54.756 * Looking for test storage... 
00:14:54.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:54.756 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.756 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.756 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.016 --rc genhtml_branch_coverage=1 00:14:55.016 --rc genhtml_function_coverage=1 00:14:55.016 --rc genhtml_legend=1 00:14:55.016 --rc geninfo_all_blocks=1 00:14:55.016 --rc geninfo_unexecuted_blocks=1 00:14:55.016 00:14:55.016 ' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.016 --rc genhtml_branch_coverage=1 00:14:55.016 --rc genhtml_function_coverage=1 00:14:55.016 --rc genhtml_legend=1 00:14:55.016 --rc geninfo_all_blocks=1 00:14:55.016 --rc geninfo_unexecuted_blocks=1 00:14:55.016 00:14:55.016 ' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.016 --rc genhtml_branch_coverage=1 00:14:55.016 --rc genhtml_function_coverage=1 00:14:55.016 --rc genhtml_legend=1 00:14:55.016 --rc geninfo_all_blocks=1 00:14:55.016 --rc geninfo_unexecuted_blocks=1 00:14:55.016 00:14:55.016 ' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:55.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.016 --rc genhtml_branch_coverage=1 00:14:55.016 --rc genhtml_function_coverage=1 00:14:55.016 --rc genhtml_legend=1 00:14:55.016 --rc geninfo_all_blocks=1 00:14:55.016 --rc geninfo_unexecuted_blocks=1 00:14:55.016 00:14:55.016 ' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.016 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:55.017 Cannot find device "nvmf_init_br" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:55.017 Cannot find device "nvmf_init_br2" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:55.017 Cannot find device "nvmf_tgt_br" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.017 Cannot find device "nvmf_tgt_br2" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:55.017 Cannot find device "nvmf_init_br" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:55.017 Cannot find device "nvmf_init_br2" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:55.017 Cannot find device "nvmf_tgt_br" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:55.017 Cannot find device "nvmf_tgt_br2" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:55.017 Cannot find device "nvmf_br" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:55.017 Cannot find device "nvmf_init_if" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:55.017 Cannot find device "nvmf_init_if2" 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.017 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:55.276 20:35:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.276 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:55.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:55.535 00:14:55.535 --- 10.0.0.3 ping statistics --- 00:14:55.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.535 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:55.535 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:55.535 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:14:55.535 00:14:55.535 --- 10.0.0.4 ping statistics --- 00:14:55.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.535 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:55.535 00:14:55.535 --- 10.0.0.1 ping statistics --- 00:14:55.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.535 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:55.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:55.535 00:14:55.535 --- 10.0.0.2 ping statistics --- 00:14:55.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.535 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74552 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74552 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74552 ']' 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
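For reference, the nvmf_veth_init trace above boils down to the hand-written sketch below of the test topology; interface names, addresses and firewall rules are taken from the traced commands, while the sketch itself is not part of the console output and is only a condensed restatement.

# Condensed sketch of the topology nvmf_veth_init builds (run as root on a scratch box only).
# Two initiator-side veths stay in the root namespace; their target-side twins live in the
# nvmf_tgt_ns_spdk namespace, and all four peer ends are enslaved to bridge nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator, 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator, 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target,    10.0.0.3/24
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target,    10.0.0.4/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if  && ip link set nvmf_init_if  up
ip addr add 10.0.0.2/24 dev nvmf_init_if2 && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk sh -c '
    ip addr add 10.0.0.3/24 dev nvmf_tgt_if  && ip link set nvmf_tgt_if  up
    ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 && ip link set nvmf_tgt_if2 up
    ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
# NVMe/TCP traffic to port 4420 is accepted on the initiator interfaces and the bridge
# forwards between its ports; each rule carries an SPDK_NVMF comment so the cleanup path
# seen later (iptables-save | grep -v SPDK_NVMF | iptables-restore) removes exactly these.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: init_if 4420'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: init_if2 4420'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF: bridge forward'
# After this, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ...)
# and its listener on 10.0.0.3:4420 is reachable from the root namespace over the bridge.
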
00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.535 20:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:55.535 [2024-11-26 20:35:55.741326] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:14:55.535 [2024-11-26 20:35:55.741935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.794 [2024-11-26 20:35:55.901507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.794 [2024-11-26 20:35:55.971385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.794 [2024-11-26 20:35:55.971459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.794 [2024-11-26 20:35:55.971474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.794 [2024-11-26 20:35:55.971484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.794 [2024-11-26 20:35:55.971493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.794 [2024-11-26 20:35:55.972874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.794 [2024-11-26 20:35:55.972993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.794 [2024-11-26 20:35:55.973299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.794 [2024-11-26 20:35:55.973431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.794 [2024-11-26 20:35:56.031429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:56.727 20:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:56.986 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:56.986 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:57.244 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:57.244 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.502 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:57.502 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:57.502 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:57.502 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:57.502 20:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.760 [2024-11-26 20:35:58.009171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.760 20:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.019 20:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:58.019 20:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.277 20:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:58.277 20:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:58.535 20:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:58.793 [2024-11-26 20:35:59.054552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.793 20:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:59.051 20:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:59.051 20:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:59.051 20:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:59.051 20:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:00.429 Initializing NVMe Controllers 00:15:00.429 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:00.429 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:00.429 Initialization complete. Launching workers. 00:15:00.429 ======================================================== 00:15:00.429 Latency(us) 00:15:00.429 Device Information : IOPS MiB/s Average min max 00:15:00.429 PCIE (0000:00:10.0) NSID 1 from core 0: 23928.32 93.47 1337.41 361.85 6443.13 00:15:00.429 ======================================================== 00:15:00.429 Total : 23928.32 93.47 1337.41 361.85 6443.13 00:15:00.429 00:15:00.429 20:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:01.393 Initializing NVMe Controllers 00:15:01.393 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.393 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:01.393 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:01.393 Initialization complete. Launching workers. 
00:15:01.393 ======================================================== 00:15:01.393 Latency(us) 00:15:01.393 Device Information : IOPS MiB/s Average min max 00:15:01.393 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3579.80 13.98 278.96 103.42 5200.39 00:15:01.393 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.99 0.48 8120.76 6066.28 12027.43 00:15:01.393 ======================================================== 00:15:01.393 Total : 3703.79 14.47 541.48 103.42 12027.43 00:15:01.393 00:15:01.651 20:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:03.028 Initializing NVMe Controllers 00:15:03.028 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.028 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:03.028 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:03.028 Initialization complete. Launching workers. 00:15:03.028 ======================================================== 00:15:03.028 Latency(us) 00:15:03.028 Device Information : IOPS MiB/s Average min max 00:15:03.028 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8778.17 34.29 3649.24 537.68 8213.05 00:15:03.028 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3999.16 15.62 8052.69 6553.05 14887.94 00:15:03.028 ======================================================== 00:15:03.028 Total : 12777.33 49.91 5027.47 537.68 14887.94 00:15:03.028 00:15:03.028 20:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:03.028 20:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:05.563 Initializing NVMe Controllers 00:15:05.563 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.563 Controller IO queue size 128, less than required. 00:15:05.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.563 Controller IO queue size 128, less than required. 00:15:05.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.563 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:05.563 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:05.563 Initialization complete. Launching workers. 
00:15:05.563 ======================================================== 00:15:05.563 Latency(us) 00:15:05.563 Device Information : IOPS MiB/s Average min max 00:15:05.563 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1613.97 403.49 80719.13 42748.45 114080.20 00:15:05.563 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 676.97 169.24 198774.16 65413.80 300036.52 00:15:05.563 ======================================================== 00:15:05.563 Total : 2290.93 572.73 115604.28 42748.45 300036.52 00:15:05.563 00:15:05.563 20:36:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:05.821 Initializing NVMe Controllers 00:15:05.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.821 Controller IO queue size 128, less than required. 00:15:05.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.821 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:05.821 Controller IO queue size 128, less than required. 00:15:05.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.821 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:05.821 WARNING: Some requested NVMe devices were skipped 00:15:05.821 No valid NVMe controllers or AIO or URING devices found 00:15:05.821 20:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:08.358 Initializing NVMe Controllers 00:15:08.358 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.358 Controller IO queue size 128, less than required. 00:15:08.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.358 Controller IO queue size 128, less than required. 00:15:08.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.358 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.358 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:08.358 Initialization complete. Launching workers. 
00:15:08.358 00:15:08.358 ==================== 00:15:08.358 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:08.358 TCP transport: 00:15:08.358 polls: 9241 00:15:08.358 idle_polls: 5835 00:15:08.358 sock_completions: 3406 00:15:08.358 nvme_completions: 6387 00:15:08.358 submitted_requests: 9594 00:15:08.358 queued_requests: 1 00:15:08.358 00:15:08.358 ==================== 00:15:08.358 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:08.358 TCP transport: 00:15:08.358 polls: 9428 00:15:08.358 idle_polls: 5434 00:15:08.358 sock_completions: 3994 00:15:08.358 nvme_completions: 6823 00:15:08.358 submitted_requests: 10258 00:15:08.358 queued_requests: 1 00:15:08.358 ======================================================== 00:15:08.358 Latency(us) 00:15:08.358 Device Information : IOPS MiB/s Average min max 00:15:08.358 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.21 399.05 81396.10 43756.06 132637.05 00:15:08.358 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1705.19 426.30 76218.05 35444.67 123348.00 00:15:08.358 ======================================================== 00:15:08.358 Total : 3301.40 825.35 78721.61 35444.67 132637.05 00:15:08.358 00:15:08.358 20:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:08.617 20:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.899 rmmod nvme_tcp 00:15:08.899 rmmod nvme_fabrics 00:15:08.899 rmmod nvme_keyring 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74552 ']' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74552 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74552 ']' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74552 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74552 00:15:08.899 killing process with pid 74552 00:15:08.899 20:36:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74552' 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74552 00:15:08.899 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74552 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:09.836 20:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:09.836 00:15:09.836 real 0m15.114s 00:15:09.836 user 0m54.309s 00:15:09.836 sys 0m4.131s 00:15:09.836 
************************************ 00:15:09.836 END TEST nvmf_perf 00:15:09.836 ************************************ 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.836 ************************************ 00:15:09.836 START TEST nvmf_fio_host 00:15:09.836 ************************************ 00:15:09.836 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:10.096 * Looking for test storage... 00:15:10.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:10.096 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:10.096 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:10.096 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.097 --rc genhtml_branch_coverage=1 00:15:10.097 --rc genhtml_function_coverage=1 00:15:10.097 --rc genhtml_legend=1 00:15:10.097 --rc geninfo_all_blocks=1 00:15:10.097 --rc geninfo_unexecuted_blocks=1 00:15:10.097 00:15:10.097 ' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.097 --rc genhtml_branch_coverage=1 00:15:10.097 --rc genhtml_function_coverage=1 00:15:10.097 --rc genhtml_legend=1 00:15:10.097 --rc geninfo_all_blocks=1 00:15:10.097 --rc geninfo_unexecuted_blocks=1 00:15:10.097 00:15:10.097 ' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.097 --rc genhtml_branch_coverage=1 00:15:10.097 --rc genhtml_function_coverage=1 00:15:10.097 --rc genhtml_legend=1 00:15:10.097 --rc geninfo_all_blocks=1 00:15:10.097 --rc geninfo_unexecuted_blocks=1 00:15:10.097 00:15:10.097 ' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:10.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.097 --rc genhtml_branch_coverage=1 00:15:10.097 --rc genhtml_function_coverage=1 00:15:10.097 --rc genhtml_legend=1 00:15:10.097 --rc geninfo_all_blocks=1 00:15:10.097 --rc geninfo_unexecuted_blocks=1 00:15:10.097 00:15:10.097 ' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.097 20:36:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.097 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.098 20:36:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:10.098 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
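The NVME_CONNECT and NVME_HOST pieces defined in the common.sh trace above are, elsewhere in the host suite, typically expanded into a kernel-initiator connect of roughly the shape sketched below. It is included only to illustrate how the generated host NQN/ID and the 10.0.0.3:4420 listener fit together; this particular test drives I/O through the SPDK fio plugin rather than the kernel initiator, and the subsystem NQN shown is the cnode1 subsystem this test creates further down.

# Illustrative only -- not executed in this run. Shows how NVME_CONNECT ('nvme connect')
# and NVME_HOST (--hostnqn/--hostid) are meant to be combined against the 10.0.0.3:4420
# listener this suite sets up.
nvme connect -t tcp \
    -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" \
    --hostid="$NVME_HOSTID"
# Tear down again with: nvme disconnect -n nqn.2016-06.io.spdk:cnode1
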
00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:10.098 Cannot find device "nvmf_init_br" 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:10.098 Cannot find device "nvmf_init_br2" 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:10.098 Cannot find device "nvmf_tgt_br" 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:10.098 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:10.357 Cannot find device "nvmf_tgt_br2" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:10.357 Cannot find device "nvmf_init_br" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:10.357 Cannot find device "nvmf_init_br2" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:10.357 Cannot find device "nvmf_tgt_br" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:10.357 Cannot find device "nvmf_tgt_br2" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:10.357 Cannot find device "nvmf_br" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:10.357 Cannot find device "nvmf_init_if" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:10.357 Cannot find device "nvmf_init_if2" 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:10.357 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:10.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:10.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:10.635 00:15:10.635 --- 10.0.0.3 ping statistics --- 00:15:10.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.635 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:10.635 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:10.635 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:15:10.635 00:15:10.635 --- 10.0.0.4 ping statistics --- 00:15:10.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.635 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:10.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:10.635 00:15:10.635 --- 10.0.0.1 ping statistics --- 00:15:10.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.635 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:10.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:15:10.635 00:15:10.635 --- 10.0.0.2 ping statistics --- 00:15:10.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.635 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75019 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75019 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75019 ']' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.635 20:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.635 [2024-11-26 20:36:10.855557] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:15:10.635 [2024-11-26 20:36:10.855646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.894 [2024-11-26 20:36:11.013301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.894 [2024-11-26 20:36:11.101168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.894 [2024-11-26 20:36:11.101250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.894 [2024-11-26 20:36:11.101276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.894 [2024-11-26 20:36:11.101287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.894 [2024-11-26 20:36:11.101296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
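Once the target answers on /var/tmp/spdk.sock, the fio host test stages its storage with a handful of rpc.py calls. A condensed sketch of that bring-up, with the bdev size, block size, NQN and listener address copied from the trace (scripts/rpc.py is abbreviated to rpc.py; the transport options are simply what nvmf/common.sh picks for tcp in this job, not a requirement):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as set by nvmf/common.sh
  rpc.py bdev_malloc_create 64 512 -b Malloc1                        # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # expose the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420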
00:15:10.894 [2024-11-26 20:36:11.102438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.894 [2024-11-26 20:36:11.102511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.894 [2024-11-26 20:36:11.102652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.894 [2024-11-26 20:36:11.102658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.894 [2024-11-26 20:36:11.159819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.894 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.894 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:10.894 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.462 [2024-11-26 20:36:11.535130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.462 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:11.462 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.462 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:11.721 Malloc1 00:15:11.721 20:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:11.980 20:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.238 20:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:12.591 [2024-11-26 20:36:12.786594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.591 20:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:12.849 20:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:13.108 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:13.108 fio-3.35 00:15:13.108 Starting 1 thread 00:15:15.641 00:15:15.641 test: (groupid=0, jobs=1): err= 0: pid=75094: Tue Nov 26 20:36:15 2024 00:15:15.641 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(68.0MiB/2007msec) 00:15:15.641 slat (usec): min=2, max=315, avg= 2.59, stdev= 3.11 00:15:15.641 clat (usec): min=2500, max=13990, avg=7689.15, stdev=589.90 00:15:15.641 lat (usec): min=2540, max=13992, avg=7691.73, stdev=589.63 00:15:15.641 clat percentiles (usec): 00:15:15.641 | 1.00th=[ 6587], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7242], 00:15:15.641 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:15:15.641 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:15:15.641 | 99.00th=[ 9634], 99.50th=[10159], 99.90th=[12387], 99.95th=[13173], 00:15:15.641 | 99.99th=[13960] 00:15:15.641 bw ( KiB/s): min=33960, max=35176, per=99.96%, avg=34696.00, stdev=528.40, samples=4 00:15:15.641 iops : min= 8490, max= 8794, avg=8674.00, stdev=132.10, samples=4 00:15:15.641 write: IOPS=8668, BW=33.9MiB/s (35.5MB/s)(68.0MiB/2007msec); 0 zone resets 00:15:15.641 slat (usec): min=2, max=258, avg= 2.68, stdev= 2.22 00:15:15.641 clat (usec): min=2365, max=13223, avg=7012.16, stdev=530.85 00:15:15.641 lat (usec): min=2379, max=13225, avg=7014.84, stdev=530.70 00:15:15.641 clat percentiles 
(usec): 00:15:15.641 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:15:15.641 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:15.641 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7767], 00:15:15.641 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[12256], 00:15:15.641 | 99.99th=[13173] 00:15:15.641 bw ( KiB/s): min=33920, max=34976, per=100.00%, avg=34674.00, stdev=504.04, samples=4 00:15:15.641 iops : min= 8480, max= 8744, avg=8668.50, stdev=126.01, samples=4 00:15:15.641 lat (msec) : 4=0.09%, 10=99.51%, 20=0.40% 00:15:15.641 cpu : usr=69.54%, sys=23.43%, ctx=7, majf=0, minf=7 00:15:15.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:15.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.641 issued rwts: total=17415,17397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.641 00:15:15.641 Run status group 0 (all jobs): 00:15:15.641 READ: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=68.0MiB (71.3MB), run=2007-2007msec 00:15:15.641 WRITE: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=68.0MiB (71.3MB), run=2007-2007msec 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:15.641 20:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:15.641 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:15.641 fio-3.35 00:15:15.641 Starting 1 thread 00:15:18.179 00:15:18.179 test: (groupid=0, jobs=1): err= 0: pid=75143: Tue Nov 26 20:36:18 2024 00:15:18.179 read: IOPS=8424, BW=132MiB/s (138MB/s)(264MiB/2008msec) 00:15:18.179 slat (usec): min=3, max=117, avg= 3.66, stdev= 1.86 00:15:18.179 clat (usec): min=2552, max=17129, avg=8439.05, stdev=2484.37 00:15:18.179 lat (usec): min=2556, max=17133, avg=8442.71, stdev=2484.40 00:15:18.179 clat percentiles (usec): 00:15:18.179 | 1.00th=[ 4015], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6194], 00:15:18.179 | 30.00th=[ 6915], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8979], 00:15:18.179 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11600], 95.00th=[12649], 00:15:18.179 | 99.00th=[15533], 99.50th=[16057], 99.90th=[16909], 99.95th=[16909], 00:15:18.179 | 99.99th=[17171] 00:15:18.179 bw ( KiB/s): min=54752, max=79265, per=50.74%, avg=68392.25, stdev=10889.31, samples=4 00:15:18.179 iops : min= 3422, max= 4954, avg=4274.50, stdev=680.56, samples=4 00:15:18.179 write: IOPS=4931, BW=77.1MiB/s (80.8MB/s)(140MiB/1812msec); 0 zone resets 00:15:18.179 slat (usec): min=34, max=334, avg=37.81, stdev= 6.83 00:15:18.179 clat (usec): min=5493, max=19663, avg=11907.05, stdev=2056.22 00:15:18.179 lat (usec): min=5529, max=19699, avg=11944.86, stdev=2055.72 00:15:18.179 clat percentiles (usec): 00:15:18.179 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:15:18.179 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:15:18.179 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14615], 95.00th=[15664], 00:15:18.179 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:15:18.179 | 99.99th=[19792] 00:15:18.179 bw ( KiB/s): min=58176, max=81692, per=90.00%, avg=71015.00, stdev=10494.10, samples=4 00:15:18.179 iops : min= 3636, max= 5105, avg=4438.25, stdev=655.63, samples=4 00:15:18.179 lat (msec) : 4=0.65%, 10=52.96%, 20=46.39% 00:15:18.179 cpu : usr=82.61%, sys=13.30%, ctx=5, majf=0, minf=14 00:15:18.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:18.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.179 issued rwts: total=16917,8936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.179 00:15:18.179 Run status group 0 (all jobs): 00:15:18.179 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s 
(138MB/s-138MB/s), io=264MiB (277MB), run=2008-2008msec 00:15:18.179 WRITE: bw=77.1MiB/s (80.8MB/s), 77.1MiB/s-77.1MiB/s (80.8MB/s-80.8MB/s), io=140MiB (146MB), run=1812-1812msec 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:18.179 rmmod nvme_tcp 00:15:18.179 rmmod nvme_fabrics 00:15:18.179 rmmod nvme_keyring 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75019 ']' 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75019 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75019 ']' 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75019 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75019 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.179 killing process with pid 75019 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75019' 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75019 00:15:18.179 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75019 00:15:18.438 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:18.438 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:18.438 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:18.438 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:18.438 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:18.439 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.698 20:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.698 20:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:18.698 00:15:18.698 real 0m8.846s 00:15:18.698 user 0m35.105s 00:15:18.698 sys 0m2.400s 00:15:18.698 20:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.698 ************************************ 00:15:18.698 END TEST nvmf_fio_host 00:15:18.698 ************************************ 00:15:18.699 20:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.959 ************************************ 00:15:18.959 START TEST nvmf_failover 00:15:18.959 ************************************ 00:15:18.959 
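The fio runs in the host test above drive I/O through SPDK's fio NVMe plugin rather than a kernel /dev/nvme device: fio is pointed at the plugin via LD_PRELOAD and the target is selected entirely through the filename string. A minimal standalone equivalent, assuming the plugin was built at build/fio/spdk_nvme and fio lives under /usr/src/fio as in this job:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
  # ioengine=spdk comes from the job file; the filename string carries transport, address, port and namespace.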
20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:18.959 * Looking for test storage... 00:15:18.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.959 --rc genhtml_branch_coverage=1 00:15:18.959 --rc genhtml_function_coverage=1 00:15:18.959 --rc genhtml_legend=1 00:15:18.959 --rc geninfo_all_blocks=1 00:15:18.959 --rc geninfo_unexecuted_blocks=1 00:15:18.959 00:15:18.959 ' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.959 --rc genhtml_branch_coverage=1 00:15:18.959 --rc genhtml_function_coverage=1 00:15:18.959 --rc genhtml_legend=1 00:15:18.959 --rc geninfo_all_blocks=1 00:15:18.959 --rc geninfo_unexecuted_blocks=1 00:15:18.959 00:15:18.959 ' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.959 --rc genhtml_branch_coverage=1 00:15:18.959 --rc genhtml_function_coverage=1 00:15:18.959 --rc genhtml_legend=1 00:15:18.959 --rc geninfo_all_blocks=1 00:15:18.959 --rc geninfo_unexecuted_blocks=1 00:15:18.959 00:15:18.959 ' 00:15:18.959 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.959 --rc genhtml_branch_coverage=1 00:15:18.959 --rc genhtml_function_coverage=1 00:15:18.960 --rc genhtml_legend=1 00:15:18.960 --rc geninfo_all_blocks=1 00:15:18.960 --rc geninfo_unexecuted_blocks=1 00:15:18.960 00:15:18.960 ' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.960 
20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:18.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
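nvmftestinit then rebuilds the same bridged veth topology the previous test tore down: initiator-side interfaces stay in the root namespace, target-side interfaces live in nvmf_tgt_ns_spdk, and everything hangs off one bridge. A stripped-down sketch of the first interface pair, using the names and addresses nvmf/common.sh uses in this run (the second pair, 10.0.0.2/10.0.0.4, and the iptables ACCEPT rules for port 4420 follow the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br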
00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:18.960 Cannot find device "nvmf_init_br" 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:18.960 Cannot find device "nvmf_init_br2" 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:18.960 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:19.226 Cannot find device "nvmf_tgt_br" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.226 Cannot find device "nvmf_tgt_br2" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:19.226 Cannot find device "nvmf_init_br" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:19.226 Cannot find device "nvmf_init_br2" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:19.226 Cannot find device "nvmf_tgt_br" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:19.226 Cannot find device "nvmf_tgt_br2" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:19.226 Cannot find device "nvmf_br" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:19.226 Cannot find device "nvmf_init_if" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:19.226 Cannot find device "nvmf_init_if2" 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.226 
20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.226 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:19.484 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.484 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.484 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.484 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:19.484 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:19.484 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:19.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:19.485 00:15:19.485 --- 10.0.0.3 ping statistics --- 00:15:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.485 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:19.485 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:19.485 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:19.485 00:15:19.485 --- 10.0.0.4 ping statistics --- 00:15:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.485 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:19.485 00:15:19.485 --- 10.0.0.1 ping statistics --- 00:15:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.485 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:19.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:19.485 00:15:19.485 --- 10.0.0.2 ping statistics --- 00:15:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.485 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75402 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75402 00:15:19.485 20:36:19 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75402 ']' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.485 20:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.485 [2024-11-26 20:36:19.720801] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:15:19.485 [2024-11-26 20:36:19.720879] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.743 [2024-11-26 20:36:19.864174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:19.743 [2024-11-26 20:36:19.920023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.743 [2024-11-26 20:36:19.920081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.743 [2024-11-26 20:36:19.920108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.743 [2024-11-26 20:36:19.920116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.743 [2024-11-26 20:36:19.920124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
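What sets the failover target apart from the earlier fio host setup is that the same subsystem is exported on three TCP ports, giving the initiator spare paths to move between. Condensed from the rpc.py calls that follow (Malloc0 is the 64 MiB backing bdev created just before):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422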
00:15:19.743 [2024-11-26 20:36:19.921314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.743 [2024-11-26 20:36:19.921397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.743 [2024-11-26 20:36:19.921398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.743 [2024-11-26 20:36:19.975082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.743 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:20.309 [2024-11-26 20:36:20.378054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.310 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:20.567 Malloc0 00:15:20.567 20:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.824 20:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.082 20:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:21.339 [2024-11-26 20:36:21.549433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:21.339 20:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:21.596 [2024-11-26 20:36:21.797607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:21.596 20:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:21.854 [2024-11-26 20:36:22.057888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75458 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
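bdevperf is launched idle (-z) on its own RPC socket; the test then attaches one named controller with an explicit failover policy and, while the 15-second verify workload runs, removes and re-adds listeners to force path switches. A minimal sketch of that driver side, with the values taken from the trace (bdevperf.py perform_tests is what actually starts the I/O):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify run in the background
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # ...then attach 4422, drop 4421, re-add 4420, drop 4422, and wait for perform_tests to finish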
00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75458 /var/tmp/bdevperf.sock 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75458 ']' 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.854 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:22.421 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.421 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:22.421 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:22.678 NVMe0n1 00:15:22.678 20:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:22.936 00:15:22.936 20:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75474 00:15:22.936 20:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:22.936 20:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:23.870 20:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:24.436 20:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:27.723 20:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:27.723 00:15:27.723 20:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:27.981 20:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:31.390 20:36:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:31.390 [2024-11-26 20:36:31.516446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:31.390 20:36:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:32.324 20:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:32.583 20:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75474 00:15:39.167 { 00:15:39.167 "results": [ 00:15:39.167 { 00:15:39.167 "job": "NVMe0n1", 00:15:39.167 "core_mask": "0x1", 00:15:39.167 "workload": "verify", 00:15:39.167 "status": "finished", 00:15:39.167 "verify_range": { 00:15:39.167 "start": 0, 00:15:39.167 "length": 16384 00:15:39.167 }, 00:15:39.167 "queue_depth": 128, 00:15:39.167 "io_size": 4096, 00:15:39.167 "runtime": 15.008625, 00:15:39.167 "iops": 9030.740657455297, 00:15:39.167 "mibps": 35.276330693184754, 00:15:39.167 "io_failed": 3469, 00:15:39.167 "io_timeout": 0, 00:15:39.167 "avg_latency_us": 13786.950423572745, 00:15:39.167 "min_latency_us": 636.7418181818182, 00:15:39.167 "max_latency_us": 23116.334545454545 00:15:39.167 } 00:15:39.167 ], 00:15:39.167 "core_count": 1 00:15:39.167 } 00:15:39.167 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75458 00:15:39.167 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75458 ']' 00:15:39.167 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75458 00:15:39.167 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75458 00:15:39.168 killing process with pid 75458 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75458' 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75458 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75458 00:15:39.168 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:39.168 [2024-11-26 20:36:22.135857] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:15:39.168 [2024-11-26 20:36:22.135971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 00:15:39.168 [2024-11-26 20:36:22.289135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.168 [2024-11-26 20:36:22.357234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.168 [2024-11-26 20:36:22.414664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.168 Running I/O for 15 seconds... 
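[editor's note] The JSON block above is the perform_tests summary returned once the 15-second verify run, including the listener removals that force the failovers, has completed. A small sketch for pulling the headline numbers out of it with jq, assuming the output is captured to a file (results.json is a hypothetical name; the test itself only prints the JSON to the log):

    # hypothetical capture of the perform_tests output shown above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests > results.json

    # headline numbers per job: IOPS, throughput, and how many I/Os failed during the failovers
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed"' results.json

For the run logged here that would print roughly "NVMe0n1: 9030.74 IOPS, 35.28 MiB/s, 3469 failed", i.e. the I/Os aborted while the controller was switching listeners, which is exactly what the try.txt dump that follows records in detail.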
00:15:39.168 6933.00 IOPS, 27.08 MiB/s [2024-11-26T20:36:39.523Z] [2024-11-26 20:36:24.493979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.168 [2024-11-26 20:36:24.494049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:39.168 [2024-11-26 20:36:24.494394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.494979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.494994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.495007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.495022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.495036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.495051] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.495065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.495080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.168 [2024-11-26 20:36:24.495094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.168 [2024-11-26 20:36:24.495110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66136 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:39.169 [2024-11-26 20:36:24.495749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.495975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.495991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.169 [2024-11-26 20:36:24.496362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.169 [2024-11-26 20:36:24.496377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.496973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.496989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497018] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497351] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.170 [2024-11-26 20:36:24.497555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.170 [2024-11-26 20:36:24.497569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.171 [2024-11-26 20:36:24.497599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.171 [2024-11-26 20:36:24.497633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.497941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:39.171 [2024-11-26 20:36:24.497970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.497986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.498000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.498029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.498066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:24.498096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.171 [2024-11-26 20:36:24.498141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67e00 is same with the state(6) to be set 00:15:39.171 [2024-11-26 20:36:24.498174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.171 [2024-11-26 20:36:24.498185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.171 [2024-11-26 20:36:24.498196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:15:39.171 [2024-11-26 20:36:24.498215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498302] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:39.171 [2024-11-26 20:36:24.498363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:24.498385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:24.498415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:39.171 [2024-11-26 20:36:24.498429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:24.498443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:24.498472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:24.498486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:39.171 [2024-11-26 20:36:24.498525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8c60 (9): Bad file descriptor 00:15:39.171 [2024-11-26 20:36:24.502355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:39.171 [2024-11-26 20:36:24.533739] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:15:39.171 7713.50 IOPS, 30.13 MiB/s [2024-11-26T20:36:39.526Z] 8259.67 IOPS, 32.26 MiB/s [2024-11-26T20:36:39.526Z] 8538.75 IOPS, 33.35 MiB/s [2024-11-26T20:36:39.526Z] [2024-11-26 20:36:28.217666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:28.217735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.217782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:28.217799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.217813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:28.217827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.217842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-11-26 20:36:28.217855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.217869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8c60 is same with the state(6) to be set 00:15:39.171 [2024-11-26 20:36:28.217941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:28.217964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.217987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:39.171 [2024-11-26 20:36:28.218004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.218021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:28.218035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.218051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:28.218065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.218081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:28.218095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.218111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.171 [2024-11-26 20:36:28.218126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-11-26 20:36:28.218141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218639] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.172 [2024-11-26 20:36:28.218706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.218975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.218990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.219004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.219020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.219040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.219056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.219070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.172 [2024-11-26 20:36:28.219086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.172 [2024-11-26 20:36:28.219100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:39.173 [2024-11-26 20:36:28.219279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.219729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219965] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.219979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.219994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.173 [2024-11-26 20:36:28.220259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.220297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.220328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.220359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.220389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.173 [2024-11-26 20:36:28.220405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.173 [2024-11-26 20:36:28.220420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:39.174 [2024-11-26 20:36:28.220596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.220750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220898] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.220973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.220986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.174 [2024-11-26 20:36:28.221248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.174 [2024-11-26 20:36:28.221640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.174 [2024-11-26 20:36:28.221655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:28.221684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:28.221713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:28.221743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.221949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:28.221963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.222008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.175 [2024-11-26 20:36:28.222023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.175 [2024-11-26 20:36:28.222035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89088 len:8 PRP1 0x0 PRP2 0x0 00:15:39.175 [2024-11-26 20:36:28.222049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:28.222112] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:39.175 [2024-11-26 20:36:28.222132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:39.175 [2024-11-26 20:36:28.225995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:39.175 [2024-11-26 20:36:28.226049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8c60 (9): Bad file descriptor 00:15:39.175 [2024-11-26 20:36:28.254417] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
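The burst above ends one failover episode: every I/O still queued on the deleted submission queue is completed manually with status (00/08), the controller path at 10.0.0.3:4421 is marked failed, and bdev_nvme reconnects to 10.0.0.3:4422. A minimal sketch (plain C for illustration, not SPDK source) of how that status pair decodes, plus a check that the per-interval throughput ticks that follow are consistent with 4 KiB I/O (len:8, i.e. 8 blocks at a 512-byte block size, so MiB/s = IOPS / 256):

#include <stdio.h>
#include <stdint.h>

/* Sketch only: map the "(SCT/SC)" pair printed with each completion.
 * SCT 0x0 is the NVMe generic command status set; SC 0x08 in that set is
 * "Command Aborted due to SQ Deletion", hence "ABORTED - SQ DELETION (00/08)". */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
	if (sct == 0x0 && sc == 0x08) {
		return "ABORTED - SQ DELETION";
	}
	return (sct == 0x0 && sc == 0x00) ? "SUCCESS" : "OTHER";
}

int main(void)
{
	printf("(00/08) -> %s\n", decode_status(0x00, 0x08));

	/* The IOPS ticks below assume 4 KiB per I/O (8 blocks x 512 B),
	 * so MiB/s = IOPS * 4096 / (1 << 20) = IOPS / 256. */
	double iops[] = { 8617.80, 8726.83, 8793.14, 8858.00, 8908.44 };
	for (int i = 0; i < 5; i++) {
		printf("%.2f IOPS -> %.2f MiB/s\n", iops[i], iops[i] / 256.0);
	}
	return 0;
}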
00:15:39.175 8617.80 IOPS, 33.66 MiB/s [2024-11-26T20:36:39.530Z] 8726.83 IOPS, 34.09 MiB/s [2024-11-26T20:36:39.530Z] 8793.14 IOPS, 34.35 MiB/s [2024-11-26T20:36:39.530Z] 8858.00 IOPS, 34.60 MiB/s [2024-11-26T20:36:39.530Z] 8908.44 IOPS, 34.80 MiB/s [2024-11-26T20:36:39.530Z] [2024-11-26 20:36:32.813569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.813982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.813997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.814011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.814041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.814071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.814099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.175 [2024-11-26 20:36:32.814128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:39.175 [2024-11-26 20:36:32.814281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.175 [2024-11-26 20:36:32.814389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.175 [2024-11-26 20:36:32.814404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.814654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814895] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.814976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.176 [2024-11-26 20:36:32.815404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 
[2024-11-26 20:36:32.815540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.176 [2024-11-26 20:36:32.815614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.176 [2024-11-26 20:36:32.815631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.815978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.815992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:79 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.177 [2024-11-26 20:36:32.816347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43456 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.177 [2024-11-26 20:36:32.816628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.177 [2024-11-26 20:36:32.816645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.178 [2024-11-26 20:36:32.816659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.178 [2024-11-26 20:36:32.816690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.178 [2024-11-26 20:36:32.816720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 
[2024-11-26 20:36:32.816870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.816975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.816989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.817026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.817056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.817115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.817145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.178 [2024-11-26 20:36:32.817176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa688b0 is same with the state(6) to be set 00:15:39.178 [2024-11-26 20:36:32.817209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43504 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43512 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43520 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43528 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:43536 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43544 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43552 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43560 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43568 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43576 len:8 PRP1 0x0 PRP2 0x0 00:15:39.178 [2024-11-26 20:36:32.817776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.178 [2024-11-26 20:36:32.817789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.178 [2024-11-26 20:36:32.817800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.178 [2024-11-26 20:36:32.817817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43584 len:8 PRP1 0x0 PRP2 0x0 
00:15:39.178 [2024-11-26 20:36:32.817831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.817845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.179 [2024-11-26 20:36:32.817856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.179 [2024-11-26 20:36:32.817866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43592 len:8 PRP1 0x0 PRP2 0x0 00:15:39.179 [2024-11-26 20:36:32.817880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.817894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.179 [2024-11-26 20:36:32.817904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.179 [2024-11-26 20:36:32.817915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43600 len:8 PRP1 0x0 PRP2 0x0 00:15:39.179 [2024-11-26 20:36:32.817929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.817943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.179 [2024-11-26 20:36:32.817953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.179 [2024-11-26 20:36:32.817964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43608 len:8 PRP1 0x0 PRP2 0x0 00:15:39.179 [2024-11-26 20:36:32.817977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.817991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.179 [2024-11-26 20:36:32.818007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.179 [2024-11-26 20:36:32.818018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43616 len:8 PRP1 0x0 PRP2 0x0 00:15:39.179 [2024-11-26 20:36:32.818032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.818046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.179 [2024-11-26 20:36:32.818056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.179 [2024-11-26 20:36:32.818067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43624 len:8 PRP1 0x0 PRP2 0x0 00:15:39.179 [2024-11-26 20:36:32.818085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.818148] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:39.179 [2024-11-26 20:36:32.818207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.179 [2024-11-26 20:36:32.818244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.818265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.179 [2024-11-26 20:36:32.818280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.818294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.179 [2024-11-26 20:36:32.818308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.818333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.179 [2024-11-26 20:36:32.818349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.179 [2024-11-26 20:36:32.818363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:39.179 [2024-11-26 20:36:32.818412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f8c60 (9): Bad file descriptor 00:15:39.179 [2024-11-26 20:36:32.822261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:39.179 [2024-11-26 20:36:32.846195] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:15:39.179 8905.50 IOPS, 34.79 MiB/s [2024-11-26T20:36:39.534Z] 8932.27 IOPS, 34.89 MiB/s [2024-11-26T20:36:39.534Z] 8952.58 IOPS, 34.97 MiB/s [2024-11-26T20:36:39.534Z] 8982.08 IOPS, 35.09 MiB/s [2024-11-26T20:36:39.534Z] 9007.00 IOPS, 35.18 MiB/s [2024-11-26T20:36:39.534Z] 9030.33 IOPS, 35.27 MiB/s 00:15:39.179 Latency(us) 00:15:39.179 [2024-11-26T20:36:39.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.179 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.179 Verification LBA range: start 0x0 length 0x4000 00:15:39.179 NVMe0n1 : 15.01 9030.74 35.28 231.13 0.00 13786.95 636.74 23116.33 00:15:39.179 [2024-11-26T20:36:39.534Z] =================================================================================================================== 00:15:39.179 [2024-11-26T20:36:39.534Z] Total : 9030.74 35.28 231.13 0.00 13786.95 636.74 23116.33 00:15:39.179 Received shutdown signal, test time was about 15.000000 seconds 00:15:39.179 00:15:39.179 Latency(us) 00:15:39.179 [2024-11-26T20:36:39.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.179 [2024-11-26T20:36:39.534Z] =================================================================================================================== 00:15:39.179 [2024-11-26T20:36:39.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:39.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
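The 15-second verify run above ends with bdevperf reporting roughly 9030 IOPS, and the harness then validates the failover path by counting "Resetting controller successful" notices in the captured output, as the next records show. A minimal sketch of that check, assuming the bdevperf log was captured to a file such as try.txt (a hedged reconstruction, not the verbatim failover.sh source):

    # Hedged sketch, not the verbatim failover.sh: count successful controller
    # resets in the captured bdevperf output and fail if the expected number
    # of failovers (3, one per listener transition) did not occur.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful failovers, saw $count" >&2
        exit 1
    fi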
00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75648 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75648 /var/tmp/bdevperf.sock 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75648 ']' 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:39.179 20:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:39.179 [2024-11-26 20:36:39.205533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:39.179 20:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:39.179 [2024-11-26 20:36:39.505822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:39.438 20:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:39.697 NVMe0n1 00:15:39.697 20:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:39.955 00:15:39.955 20:36:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:40.213 00:15:40.213 20:36:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:40.213 20:36:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:40.471 20:36:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:40.730 20:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:44.015 20:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:44.015 20:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:44.274 20:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75717 00:15:44.274 20:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.274 20:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75717 00:15:45.269 { 00:15:45.269 "results": [ 00:15:45.269 { 00:15:45.269 "job": "NVMe0n1", 00:15:45.269 "core_mask": "0x1", 00:15:45.269 "workload": "verify", 00:15:45.269 "status": "finished", 00:15:45.269 "verify_range": { 00:15:45.269 "start": 0, 00:15:45.269 "length": 16384 00:15:45.269 }, 00:15:45.269 "queue_depth": 128, 00:15:45.269 "io_size": 4096, 00:15:45.269 "runtime": 1.009107, 00:15:45.269 "iops": 7250.965457577839, 00:15:45.269 "mibps": 28.324083818663432, 00:15:45.269 "io_failed": 0, 00:15:45.269 "io_timeout": 0, 00:15:45.269 "avg_latency_us": 17537.621934473893, 00:15:45.269 "min_latency_us": 1370.2981818181818, 00:15:45.269 "max_latency_us": 19184.174545454545 00:15:45.269 } 00:15:45.269 ], 00:15:45.269 "core_count": 1 00:15:45.269 } 00:15:45.269 20:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.269 [2024-11-26 20:36:38.622551] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
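The records above show the second phase of the failover test: listeners are added on ports 4421 and 4422, NVMe0 is attached with -x failover, the 10.0.0.3:4420 path is detached to force a failover, and a one-second verify job is replayed over the bdevperf RPC socket; the JSON blob is that job's result. A minimal sketch of the detach-and-rerun sequence, with paths and arguments copied from the log (a hedged reconstruction, not the verbatim failover.sh source):

    # Hedged sketch of the detach-driven failover shown above.
    SPDK=/home/vagrant/spdk_repo/spdk        # repo root as used in this log
    SOCK=/var/tmp/bdevperf.sock              # bdevperf RPC socket from "-r" above

    # Confirm the controller exists before removing a path.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the active 10.0.0.3:4420 path so I/O must fail over to 4421/4422.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    sleep 3

    # Replay the queued verify workload through the bdevperf helper script.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests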
00:15:45.269 [2024-11-26 20:36:38.622658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75648 ] 00:15:45.269 [2024-11-26 20:36:38.764400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.269 [2024-11-26 20:36:38.815472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.269 [2024-11-26 20:36:38.868777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.269 [2024-11-26 20:36:41.048193] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:45.269 [2024-11-26 20:36:41.048362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.269 [2024-11-26 20:36:41.048389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.269 [2024-11-26 20:36:41.048408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.269 [2024-11-26 20:36:41.048422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.269 [2024-11-26 20:36:41.048437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.270 [2024-11-26 20:36:41.048458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.270 [2024-11-26 20:36:41.048473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.270 [2024-11-26 20:36:41.048487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.270 [2024-11-26 20:36:41.048508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:45.270 [2024-11-26 20:36:41.048557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:45.270 [2024-11-26 20:36:41.048589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2296c60 (9): Bad file descriptor 00:15:45.270 [2024-11-26 20:36:41.052471] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:45.270 Running I/O for 1 seconds... 
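As a quick cross-check of the result JSON above: with a 4096-byte I/O size, the reported 7250.97 IOPS works out to 7250.97 * 4096 / 2^20 ≈ 28.32 MiB/s, which matches the mibps field. A one-line sketch of that arithmetic, with values copied from the JSON above:

    # Throughput sanity check: MiB/s = IOPS * io_size / 2^20 (values from the JSON above).
    awk 'BEGIN { printf "%.2f MiB/s\n", 7250.965457577839 * 4096 / (1024 * 1024) }'   # -> 28.32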
00:15:45.270 7189.00 IOPS, 28.08 MiB/s 00:15:45.270 Latency(us) 00:15:45.270 [2024-11-26T20:36:45.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.270 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.270 Verification LBA range: start 0x0 length 0x4000 00:15:45.270 NVMe0n1 : 1.01 7250.97 28.32 0.00 0.00 17537.62 1370.30 19184.17 00:15:45.270 [2024-11-26T20:36:45.625Z] =================================================================================================================== 00:15:45.270 [2024-11-26T20:36:45.625Z] Total : 7250.97 28.32 0.00 0.00 17537.62 1370.30 19184.17 00:15:45.270 20:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:45.270 20:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.837 20:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.095 20:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:46.095 20:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:46.353 20:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.612 20:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:49.970 20:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:49.970 20:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75648 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75648 ']' 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75648 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75648 00:15:49.970 killing process with pid 75648 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75648' 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75648 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75648 00:15:49.970 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:50.243 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.502 rmmod nvme_tcp 00:15:50.502 rmmod nvme_fabrics 00:15:50.502 rmmod nvme_keyring 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75402 ']' 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75402 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75402 ']' 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75402 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75402 00:15:50.502 killing process with pid 75402 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75402' 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75402 00:15:50.502 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75402 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:50.761 20:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:50.761 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:51.020 00:15:51.020 real 0m32.114s 00:15:51.020 user 2m4.438s 00:15:51.020 sys 0m5.552s 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.020 ************************************ 00:15:51.020 END TEST nvmf_failover 00:15:51.020 ************************************ 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.020 ************************************ 00:15:51.020 START TEST nvmf_host_discovery 00:15:51.020 ************************************ 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:51.020 * Looking for test storage... 
00:15:51.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:51.020 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.280 --rc genhtml_branch_coverage=1 00:15:51.280 --rc genhtml_function_coverage=1 00:15:51.280 --rc genhtml_legend=1 00:15:51.280 --rc geninfo_all_blocks=1 00:15:51.280 --rc geninfo_unexecuted_blocks=1 00:15:51.280 00:15:51.280 ' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.280 --rc genhtml_branch_coverage=1 00:15:51.280 --rc genhtml_function_coverage=1 00:15:51.280 --rc genhtml_legend=1 00:15:51.280 --rc geninfo_all_blocks=1 00:15:51.280 --rc geninfo_unexecuted_blocks=1 00:15:51.280 00:15:51.280 ' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.280 --rc genhtml_branch_coverage=1 00:15:51.280 --rc genhtml_function_coverage=1 00:15:51.280 --rc genhtml_legend=1 00:15:51.280 --rc geninfo_all_blocks=1 00:15:51.280 --rc geninfo_unexecuted_blocks=1 00:15:51.280 00:15:51.280 ' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.280 --rc genhtml_branch_coverage=1 00:15:51.280 --rc genhtml_function_coverage=1 00:15:51.280 --rc genhtml_legend=1 00:15:51.280 --rc geninfo_all_blocks=1 00:15:51.280 --rc geninfo_unexecuted_blocks=1 00:15:51.280 00:15:51.280 ' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:51.280 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.281 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:51.281 Cannot find device "nvmf_init_br" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:51.281 Cannot find device "nvmf_init_br2" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:51.281 Cannot find device "nvmf_tgt_br" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.281 Cannot find device "nvmf_tgt_br2" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:51.281 Cannot find device "nvmf_init_br" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:51.281 Cannot find device "nvmf_init_br2" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:51.281 Cannot find device "nvmf_tgt_br" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:51.281 Cannot find device "nvmf_tgt_br2" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:51.281 Cannot find device "nvmf_br" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:51.281 Cannot find device "nvmf_init_if" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:51.281 Cannot find device "nvmf_init_if2" 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.281 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:51.541 00:15:51.541 --- 10.0.0.3 ping statistics --- 00:15:51.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.541 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.541 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:51.541 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:15:51.541 00:15:51.541 --- 10.0.0.4 ping statistics --- 00:15:51.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.541 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:51.541 00:15:51.541 --- 10.0.0.1 ping statistics --- 00:15:51.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.541 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:51.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:51.541 00:15:51.541 --- 10.0.0.2 ping statistics --- 00:15:51.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.541 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76051 00:15:51.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76051 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76051 ']' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.541 20:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:51.541 [2024-11-26 20:36:51.875022] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:15:51.541 [2024-11-26 20:36:51.875248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.801 [2024-11-26 20:36:52.016870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.801 [2024-11-26 20:36:52.076729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.801 [2024-11-26 20:36:52.077020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.801 [2024-11-26 20:36:52.077057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.801 [2024-11-26 20:36:52.077067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.801 [2024-11-26 20:36:52.077075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.801 [2024-11-26 20:36:52.077496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.801 [2024-11-26 20:36:52.133198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 [2024-11-26 20:36:52.900112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 [2024-11-26 20:36:52.912215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.738 20:36:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 null0 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 null1 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76079 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76079 /tmp/host.sock 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76079 ']' 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.738 20:36:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.738 [2024-11-26 20:36:53.002396] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:15:52.738 [2024-11-26 20:36:53.002660] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76079 ] 00:15:52.997 [2024-11-26 20:36:53.155213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.997 [2024-11-26 20:36:53.216082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.997 [2024-11-26 20:36:53.276416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.997 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.997 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:52.997 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:52.997 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:52.997 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.997 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.257 20:36:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 20:36:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.257 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.517 [2024-11-26 20:36:53.680400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:53.517 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.518 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:53.777 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.777 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:15:53.777 20:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:15:54.035 [2024-11-26 20:36:54.365211] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:54.035 [2024-11-26 20:36:54.365397] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:54.035 [2024-11-26 20:36:54.365460] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:54.035 [2024-11-26 20:36:54.371271] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:54.294 [2024-11-26 20:36:54.425805] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:54.294 [2024-11-26 20:36:54.426966] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1263e60:1 started. 00:15:54.294 [2024-11-26 20:36:54.429039] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:54.294 [2024-11-26 20:36:54.429239] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:54.294 [2024-11-26 20:36:54.433792] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1263e60 was disconnected and freed. delete nvme_qpair. 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.862 20:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.862 20:36:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.862 [2024-11-26 20:36:55.147619] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12722f0:1 started. 00:15:54.862 [2024-11-26 20:36:55.154250] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12722f0 was disconnected and freed. delete nvme_qpair. 
00:15:54.862 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.863 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.121 [2024-11-26 20:36:55.249715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:55.121 [2024-11-26 20:36:55.250402] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:55.121 [2024-11-26 20:36:55.250429] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.121 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.122 [2024-11-26 20:36:55.256413] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.122 [2024-11-26 20:36:55.318776] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:55.122 [2024-11-26 20:36:55.318833] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:55.122 [2024-11-26 20:36:55.318845] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:55.122 [2024-11-26 20:36:55.318851] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.122 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.382 [2024-11-26 20:36:55.494163] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:55.382 [2024-11-26 20:36:55.494204] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:55.382 [2024-11-26 20:36:55.495971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.382 [2024-11-26 20:36:55.496007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.382 [2024-11-26 20:36:55.496021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.382 [2024-11-26 20:36:55.496032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.382 [2024-11-26 20:36:55.496043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.382 [2024-11-26 20:36:55.496052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.382 [2024-11-26 20:36:55.496062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.382 [2024-11-26 20:36:55.496071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.382 [2024-11-26 20:36:55.496080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1240240 is same with the state(6) to be set 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.382 [2024-11-26 20:36:55.500183] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:55.382 [2024-11-26 20:36:55.500234] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:55.382 [2024-11-26 20:36:55.500307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1240240 (9): Bad file descriptor 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.382 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.383 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.641 
20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:55.641 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:55.642 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.642 20:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.576 [2024-11-26 20:36:56.896022] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:56.576 [2024-11-26 20:36:56.896062] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:56.576 [2024-11-26 20:36:56.896084] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:56.576 [2024-11-26 20:36:56.902059] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:56.835 [2024-11-26 20:36:56.960415] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:56.835 [2024-11-26 20:36:56.961261] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x124bc30:1 started. 00:15:56.835 [2024-11-26 20:36:56.963550] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:56.835 [2024-11-26 20:36:56.963599] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:56.835 [2024-11-26 20:36:56.964938] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.835 24bc30 was disconnected and freed. delete nvme_qpair. 
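    After the teardown above, the test restarts discovery with bdev_nvme_start_discovery and then asserts that a second start against the same discovery service is rejected. The commands below are the ones visible in the trace; NOT is the helper from common/autotest_common.sh that inverts the exit status, so the expected JSON-RPC error -17 ("File exists") in the next step makes the assertion pass.

        # Start discovery against the target's discovery service (10.0.0.3:8009)
        # and wait for the controller to attach (-w / wait_for_attach).
        rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

        # A second start with the same base name must fail with -17 "File exists";
        # NOT succeeds only when the wrapped command fails.
        NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w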
00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.835 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.835 request: 00:15:56.835 { 00:15:56.835 "name": "nvme", 00:15:56.835 "trtype": "tcp", 00:15:56.835 "traddr": "10.0.0.3", 00:15:56.835 "adrfam": "ipv4", 00:15:56.835 "trsvcid": "8009", 00:15:56.835 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:56.835 "wait_for_attach": true, 00:15:56.835 "method": "bdev_nvme_start_discovery", 00:15:56.835 "req_id": 1 00:15:56.835 } 00:15:56.835 Got JSON-RPC error response 00:15:56.835 response: 00:15:56.835 { 00:15:56.835 "code": -17, 00:15:56.835 "message": "File exists" 00:15:56.836 } 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:56.836 20:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.836 request: 00:15:56.836 { 00:15:56.836 "name": "nvme_second", 00:15:56.836 "trtype": "tcp", 00:15:56.836 "traddr": "10.0.0.3", 00:15:56.836 "adrfam": "ipv4", 00:15:56.836 "trsvcid": "8009", 00:15:56.836 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:56.836 "wait_for_attach": true, 00:15:56.836 "method": "bdev_nvme_start_discovery", 00:15:56.836 "req_id": 1 00:15:56.836 } 00:15:56.836 Got JSON-RPC error response 00:15:56.836 response: 00:15:56.836 { 00:15:56.836 "code": -17, 00:15:56.836 "message": "File exists" 00:15:56.836 } 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.836 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:57.095 20:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.031 [2024-11-26 20:36:58.207965] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:58.031 [2024-11-26 20:36:58.208041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123ff40 with addr=10.0.0.3, port=8010 00:15:58.031 [2024-11-26 20:36:58.208067] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:58.031 [2024-11-26 20:36:58.208085] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:58.031 [2024-11-26 20:36:58.208095] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:58.967 [2024-11-26 20:36:59.207954] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:58.967 [2024-11-26 20:36:59.208022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123ff40 with addr=10.0.0.3, port=8010 00:15:58.967 [2024-11-26 20:36:59.208048] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:58.967 [2024-11-26 20:36:59.208060] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:58.967 [2024-11-26 20:36:59.208069] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:59.902 [2024-11-26 20:37:00.207815] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:59.902 request: 00:15:59.902 { 00:15:59.902 "name": "nvme_second", 00:15:59.902 "trtype": "tcp", 00:15:59.902 "traddr": "10.0.0.3", 00:15:59.902 "adrfam": "ipv4", 00:15:59.902 "trsvcid": "8010", 00:15:59.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:59.902 "wait_for_attach": false, 00:15:59.902 "attach_timeout_ms": 3000, 00:15:59.902 "method": "bdev_nvme_start_discovery", 00:15:59.902 "req_id": 1 00:15:59.902 } 00:15:59.902 Got JSON-RPC error response 00:15:59.902 response: 00:15:59.902 { 00:15:59.902 "code": -110, 00:15:59.902 "message": "Connection timed out" 00:15:59.902 } 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:59.902 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:59.902 20:37:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76079 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.161 rmmod nvme_tcp 00:16:00.161 rmmod nvme_fabrics 00:16:00.161 rmmod nvme_keyring 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76051 ']' 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76051 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76051 ']' 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76051 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76051 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:00.161 killing process with pid 76051 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76051' 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76051 00:16:00.161 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76051 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # iptables-restore 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:00.420 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:00.421 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:00.679 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:00.680 00:16:00.680 real 0m9.639s 00:16:00.680 user 0m17.723s 00:16:00.680 sys 0m2.021s 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.680 ************************************ 00:16:00.680 END TEST nvmf_host_discovery 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 ************************************ 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 ************************************ 
00:16:00.680 START TEST nvmf_host_multipath_status 00:16:00.680 ************************************ 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:00.680 * Looking for test storage... 00:16:00.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:16:00.680 20:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:00.939 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.940 --rc genhtml_branch_coverage=1 00:16:00.940 --rc genhtml_function_coverage=1 00:16:00.940 --rc genhtml_legend=1 00:16:00.940 --rc geninfo_all_blocks=1 00:16:00.940 --rc geninfo_unexecuted_blocks=1 00:16:00.940 00:16:00.940 ' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.940 --rc genhtml_branch_coverage=1 00:16:00.940 --rc genhtml_function_coverage=1 00:16:00.940 --rc genhtml_legend=1 00:16:00.940 --rc geninfo_all_blocks=1 00:16:00.940 --rc geninfo_unexecuted_blocks=1 00:16:00.940 00:16:00.940 ' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.940 --rc genhtml_branch_coverage=1 00:16:00.940 --rc genhtml_function_coverage=1 00:16:00.940 --rc genhtml_legend=1 00:16:00.940 --rc geninfo_all_blocks=1 00:16:00.940 --rc geninfo_unexecuted_blocks=1 00:16:00.940 00:16:00.940 ' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.940 --rc genhtml_branch_coverage=1 00:16:00.940 --rc genhtml_function_coverage=1 00:16:00.940 --rc genhtml_legend=1 00:16:00.940 --rc geninfo_all_blocks=1 00:16:00.940 --rc geninfo_unexecuted_blocks=1 00:16:00.940 00:16:00.940 ' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:00.940 20:37:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:00.940 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:00.941 Cannot find device "nvmf_init_br" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:00.941 Cannot find device "nvmf_init_br2" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:00.941 Cannot find device "nvmf_tgt_br" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.941 Cannot find device "nvmf_tgt_br2" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:00.941 Cannot find device "nvmf_init_br" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:00.941 Cannot find device "nvmf_init_br2" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:00.941 Cannot find device "nvmf_tgt_br" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:00.941 Cannot find device "nvmf_tgt_br2" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:00.941 Cannot find device "nvmf_br" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:00.941 Cannot find device "nvmf_init_if" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:00.941 Cannot find device "nvmf_init_if2" 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.941 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.199 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:01.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:01.200 00:16:01.200 --- 10.0.0.3 ping statistics --- 00:16:01.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.200 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:01.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:01.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:16:01.200 00:16:01.200 --- 10.0.0.4 ping statistics --- 00:16:01.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.200 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:01.200 00:16:01.200 --- 10.0.0.1 ping statistics --- 00:16:01.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.200 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:01.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:01.200 00:16:01.200 --- 10.0.0.2 ping statistics --- 00:16:01.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.200 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76585 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76585 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76585 ']' 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.200 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:01.457 [2024-11-26 20:37:01.611558] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:16:01.457 [2024-11-26 20:37:01.611671] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.458 [2024-11-26 20:37:01.755867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:01.715 [2024-11-26 20:37:01.814672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.715 [2024-11-26 20:37:01.814716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.715 [2024-11-26 20:37:01.814727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.715 [2024-11-26 20:37:01.814736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.715 [2024-11-26 20:37:01.814744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.715 [2024-11-26 20:37:01.815928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.715 [2024-11-26 20:37:01.815919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.715 [2024-11-26 20:37:01.871849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76585 00:16:01.715 20:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:01.973 [2024-11-26 20:37:02.261308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.973 20:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:02.231 Malloc0 00:16:02.231 20:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:02.489 20:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:02.746 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:03.003 [2024-11-26 20:37:03.313242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:03.003 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:03.261 [2024-11-26 20:37:03.569388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76634 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76634 /var/tmp/bdevperf.sock 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76634 ']' 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.261 20:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:04.635 20:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.635 20:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:04.635 20:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:04.635 20:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:04.893 Nvme0n1 00:16:04.893 20:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:05.151 Nvme0n1 00:16:05.409 20:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:05.409 20:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:07.310 20:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:07.310 20:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:07.568 20:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:07.827 20:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:08.764 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:08.764 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:08.764 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.764 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:09.025 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.025 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:09.025 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:09.025 20:37:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.284 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:09.543 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:09.543 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.543 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:09.802 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.802 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:09.802 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.802 20:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:10.060 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.060 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:10.060 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.060 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:10.319 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.319 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:10.319 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:10.319 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.577 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.577 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:10.577 20:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:10.836 20:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
00:16:11.095 20:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:12.032 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:12.032 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:12.032 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.032 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:12.291 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:12.291 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:12.291 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.291 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:12.550 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.550 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:12.550 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.550 20:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:13.117 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.376 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.376 20:37:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:13.376 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.376 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:13.635 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.635 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:13.635 20:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:14.203 20:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:14.203 20:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.581 20:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:15.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:15.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:15.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:15.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.406 20:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:16.973 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:17.539 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:17.808 20:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:18.743 20:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:18.743 20:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:18.743 20:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:18.743 20:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.000 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.000 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:16:19.000 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.000 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:19.259 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.259 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:19.259 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:19.259 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.517 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.517 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:19.517 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.517 20:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:19.775 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.775 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:19.775 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.775 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:20.091 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.091 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:20.091 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.091 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:20.364 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.364 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:20.364 20:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:20.932 20:37:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:21.190 20:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:22.125 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:22.125 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:22.125 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.125 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:22.383 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:22.383 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:22.383 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:22.383 20:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.951 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:23.211 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.211 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:23.211 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.211 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:16:23.779 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.779 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:23.779 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.779 20:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:23.779 20:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.779 20:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:23.779 20:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:24.038 20:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:24.299 20:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:25.699 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:25.699 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:25.699 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.700 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:25.700 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:25.700 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:25.700 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.700 20:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:25.959 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.959 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:25.959 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.959 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:16:26.218 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.218 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:26.218 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:26.218 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.477 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.477 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:26.477 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:26.477 20:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.044 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.044 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:27.044 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.044 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:27.044 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.044 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:27.302 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:27.302 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:27.869 20:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:27.869 20:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.245 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:29.503 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.503 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:29.503 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.503 20:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:29.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:29.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:29.792 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.359 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.359 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:30.359 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.359 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:30.360 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.360 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:30.360 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.360 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:30.926 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.926 
20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:30.926 20:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:31.185 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:31.444 20:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:32.382 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:32.382 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:32.382 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.382 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:32.641 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.641 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:32.641 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.641 20:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:32.899 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.899 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.899 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:32.899 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.158 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.158 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:33.158 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.158 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.726 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.726 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:33.726 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.726 20:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.726 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.726 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:33.726 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.726 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.985 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.985 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:33.985 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:34.551 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:34.809 20:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:35.747 20:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:35.747 20:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:35.747 20:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.747 20:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.006 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.006 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:36.006 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.006 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.265 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.265 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:36.265 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.265 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.525 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.525 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.525 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.525 20:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.786 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.786 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.786 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.786 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:37.357 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.357 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:37.357 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.357 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.617 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.617 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:37.617 20:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:37.876 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:38.135 20:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:39.518 20:37:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.518 20:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.776 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:39.776 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.776 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.776 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:40.035 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.035 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:40.035 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.035 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:40.293 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.293 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:40.293 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.293 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.860 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.860 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:40.860 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.860 20:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76634 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76634 ']' 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76634 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76634 00:16:41.167 killing process with pid 76634 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76634' 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76634 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76634 00:16:41.167 { 00:16:41.167 "results": [ 00:16:41.167 { 00:16:41.167 "job": "Nvme0n1", 00:16:41.167 "core_mask": "0x4", 00:16:41.167 "workload": "verify", 00:16:41.167 "status": "terminated", 00:16:41.167 "verify_range": { 00:16:41.167 "start": 0, 00:16:41.167 "length": 16384 00:16:41.167 }, 00:16:41.167 "queue_depth": 128, 00:16:41.167 "io_size": 4096, 00:16:41.167 "runtime": 35.604761, 00:16:41.167 "iops": 8437.972663262646, 00:16:41.167 "mibps": 32.96083071586971, 00:16:41.167 "io_failed": 0, 00:16:41.167 "io_timeout": 0, 00:16:41.167 "avg_latency_us": 15137.864792807448, 00:16:41.167 "min_latency_us": 770.7927272727272, 00:16:41.167 "max_latency_us": 4087539.898181818 00:16:41.167 } 00:16:41.167 ], 00:16:41.167 "core_count": 1 00:16:41.167 } 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76634 00:16:41.167 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:41.167 [2024-11-26 20:37:03.635677] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:16:41.167 [2024-11-26 20:37:03.635779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76634 ] 00:16:41.167 [2024-11-26 20:37:03.785029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.167 [2024-11-26 20:37:03.847511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.167 [2024-11-26 20:37:03.903517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.167 Running I/O for 90 seconds... 
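The terminated-job summary above is plain JSON; a small sketch for pulling the headline numbers out of it with jq, assuming the block has been saved to a file (results.json is a hypothetical name):
# job name, throughput, and average latency from the bdevperf result block
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json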
00:16:41.167 6805.00 IOPS, 26.58 MiB/s [2024-11-26T20:37:41.522Z] 6922.00 IOPS, 27.04 MiB/s [2024-11-26T20:37:41.522Z] 6919.00 IOPS, 27.03 MiB/s [2024-11-26T20:37:41.522Z] 6885.25 IOPS, 26.90 MiB/s [2024-11-26T20:37:41.522Z] 7030.40 IOPS, 27.46 MiB/s [2024-11-26T20:37:41.522Z] 7385.17 IOPS, 28.85 MiB/s [2024-11-26T20:37:41.522Z] 7646.71 IOPS, 29.87 MiB/s [2024-11-26T20:37:41.522Z] 7849.62 IOPS, 30.66 MiB/s [2024-11-26T20:37:41.522Z] 8003.89 IOPS, 31.27 MiB/s [2024-11-26T20:37:41.522Z] 8147.00 IOPS, 31.82 MiB/s [2024-11-26T20:37:41.522Z] 8258.45 IOPS, 32.26 MiB/s [2024-11-26T20:37:41.522Z] 8352.67 IOPS, 32.63 MiB/s [2024-11-26T20:37:41.522Z] 8430.15 IOPS, 32.93 MiB/s [2024-11-26T20:37:41.522Z] 8492.79 IOPS, 33.17 MiB/s [2024-11-26T20:37:41.522Z] 8549.00 IOPS, 33.39 MiB/s [2024-11-26T20:37:41.522Z] [2024-11-26 20:37:20.980737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.980814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.980851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.980869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.980892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.980909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.980930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.980946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.980967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.980983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.167 [2024-11-26 20:37:20.981401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.167 [2024-11-26 20:37:20.981423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 
[2024-11-26 20:37:20.981513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.981795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2416 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.981965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.981981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982675] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.168 [2024-11-26 20:37:20.982801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.168 [2024-11-26 20:37:20.982912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:41.168 [2024-11-26 20:37:20.982933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.982948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.982970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.982986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
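The ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices repeated here are the expected host-side completions while a path's ANA state is inaccessible; a quick way to tally them from the captured log file named above (the try.txt cat'ed earlier):
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt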
00:16:41.169 [2024-11-26 20:37:20.983051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.169 [2024-11-26 20:37:20.983537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.983982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.983997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.169 [2024-11-26 20:37:20.984467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.169 [2024-11-26 20:37:20.984488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.984504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.984909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.984946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.984967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.984989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.985485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.985500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.986824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.170 [2024-11-26 20:37:20.986855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.986884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.986902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.986924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.986943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.986964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.986980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:41.170 
[2024-11-26 20:37:20.987114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.170 [2024-11-26 20:37:20.987331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:41.170 [2024-11-26 20:37:20.987353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 
cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.987820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.987836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.171 [2024-11-26 20:37:20.988557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:41.171 [2024-11-26 20:37:20.988741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.171 [2024-11-26 20:37:20.988756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.988778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.988793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.988825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.988842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.988864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.988879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.988900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.988916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.988937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.988953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.988974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.988989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:41.172 [2024-11-26 20:37:20.989072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.172 [2024-11-26 20:37:20.989614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.989762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.989778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.172 [2024-11-26 20:37:20.990769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.172 [2024-11-26 20:37:20.990784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:20.990805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.173 [2024-11-26 20:37:20.990821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:20.990843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:20.990858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:20.990880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:20.990901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:20.990923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:20.990939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:20.990960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:20.990976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:20.990997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:16:41.173 [2024-11-26 20:37:21.000531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.000969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.000991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.001029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.001064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.001100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.173 [2024-11-26 20:37:21.001136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.173 [2024-11-26 20:37:21.001173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.173 [2024-11-26 20:37:21.001218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.173 [2024-11-26 20:37:21.001282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.173 [2024-11-26 20:37:21.001319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.173 [2024-11-26 20:37:21.001340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.173 [2024-11-26 20:37:21.001355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.001391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.001427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.001472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.001510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.001546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 
[2024-11-26 20:37:21.001654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.001970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.001986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2232 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.002021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.002057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.002092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.174 [2024-11-26 20:37:21.002128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002746] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.174 [2024-11-26 20:37:21.002781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.174 [2024-11-26 20:37:21.002795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.002816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.002837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.002859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.002874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.002895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.002909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.002931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.002946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.002967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.002982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:16:41.175 [2024-11-26 20:37:21.003112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.175 [2024-11-26 20:37:21.003369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:41.175 [2024-11-26 20:37:21.003931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.175 [2024-11-26 20:37:21.003946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.003967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.003981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.004027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.004063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.004399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.004435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.004471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.004493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.004508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.176 [2024-11-26 20:37:21.007705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.007742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.007779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.176 [2024-11-26 20:37:21.007827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.176 [2024-11-26 20:37:21.007849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.007865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.007886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.007902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.007923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.007939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.007960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.007976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.007998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.177 
[2024-11-26 20:37:21.008392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.008628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.008977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.008992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.009013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.177 [2024-11-26 20:37:21.009029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.177 [2024-11-26 20:37:21.009050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.177 [2024-11-26 20:37:21.009065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.178 [2024-11-26 20:37:21.009653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:41.178 [2024-11-26 20:37:21.009943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.009964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.009980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:41.178 [2024-11-26 20:37:21.010513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.178 [2024-11-26 20:37:21.010529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.010603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.010639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.010941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.010977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.010999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:41.179 
[2024-11-26 20:37:21.011474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.011665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 
cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.179 [2024-11-26 20:37:21.011969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:41.179 [2024-11-26 20:37:21.011990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.179 [2024-11-26 20:37:21.012006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.012027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.012043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.014965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.014986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.015008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.015047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.015083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.180 [2024-11-26 20:37:21.015120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:41.180 [2024-11-26 20:37:21.015428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.180 [2024-11-26 20:37:21.015753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.180 [2024-11-26 20:37:21.015775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.015790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.015811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.015826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.015847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.015863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.015884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.015899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.015920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.015935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.015970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.015986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.181 [2024-11-26 20:37:21.016451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.181 
[2024-11-26 20:37:21.016583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.016977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.016998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.017013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.017035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.181 [2024-11-26 20:37:21.017050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:41.181 [2024-11-26 20:37:21.017071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.017086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.017107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.017122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.017144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.017159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.017186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.017202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.025863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 
[2024-11-26 20:37:21.025936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.025972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.025993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.026007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.026044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.026080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.026116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.182 [2024-11-26 20:37:21.026159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.026196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.026258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:41.182 [2024-11-26 20:37:21.026281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.182 [2024-11-26 20:37:21.026297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2424 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.026850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.026887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.026923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.026959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.026980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.026995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.027016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.027031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.027052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.027067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.027096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.027112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.027133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:21.027148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.027169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.027185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:21.027706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:21.027736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:41.183 8206.19 IOPS, 32.06 MiB/s [2024-11-26T20:37:41.538Z] 7723.47 IOPS, 30.17 MiB/s [2024-11-26T20:37:41.538Z] 7294.39 IOPS, 28.49 MiB/s [2024-11-26T20:37:41.538Z] 6910.47 IOPS, 26.99 MiB/s [2024-11-26T20:37:41.538Z] 6855.50 IOPS, 26.78 MiB/s [2024-11-26T20:37:41.538Z] 6971.33 IOPS, 27.23 MiB/s [2024-11-26T20:37:41.538Z] 7073.36 IOPS, 27.63 MiB/s [2024-11-26T20:37:41.538Z] 7249.87 IOPS, 28.32 MiB/s [2024-11-26T20:37:41.538Z] 7459.71 IOPS, 29.14 MiB/s [2024-11-26T20:37:41.538Z] 7646.44 IOPS, 29.87 MiB/s [2024-11-26T20:37:41.538Z] 7780.23 IOPS, 30.39 MiB/s [2024-11-26T20:37:41.538Z] 7812.22 IOPS, 30.52 MiB/s [2024-11-26T20:37:41.538Z] 7858.79 IOPS, 30.70 MiB/s [2024-11-26T20:37:41.538Z] 7899.52 IOPS, 30.86 MiB/s [2024-11-26T20:37:41.538Z] 8019.67 IOPS, 31.33 MiB/s [2024-11-26T20:37:41.538Z] 8147.74 IOPS, 31.83 MiB/s [2024-11-26T20:37:41.538Z] 8277.53 IOPS, 32.33 MiB/s [2024-11-26T20:37:41.538Z] [2024-11-26 20:37:38.402177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:38.402281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:38.402362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:41.183 [2024-11-26 20:37:38.402401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:38.402441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:38.402478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:38.402707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:38.402775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.183 [2024-11-26 20:37:38.402815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:38.402851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.183 [2024-11-26 20:37:38.402873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.183 [2024-11-26 20:37:38.402888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.402909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.402923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.402944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.402960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.402981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:41.184 [2024-11-26 20:37:38.403793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.403809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.403840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.403856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.405225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.405495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.405531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.405567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.405617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.184 [2024-11-26 20:37:38.405656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.184 [2024-11-26 20:37:38.405714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.184 [2024-11-26 20:37:38.405729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.185 [2024-11-26 20:37:38.405751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.185 [2024-11-26 20:37:38.405766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.185 [2024-11-26 20:37:38.405787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.185 [2024-11-26 20:37:38.405804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.185 8377.73 IOPS, 32.73 MiB/s [2024-11-26T20:37:41.540Z] 8404.97 IOPS, 32.83 MiB/s [2024-11-26T20:37:41.540Z] 8425.83 IOPS, 32.91 MiB/s [2024-11-26T20:37:41.540Z] Received shutdown signal, test time was about 35.605566 seconds 00:16:41.185 00:16:41.185 Latency(us) 00:16:41.185 [2024-11-26T20:37:41.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.185 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:41.185 Verification LBA range: start 0x0 length 0x4000 00:16:41.185 Nvme0n1 : 35.60 8437.97 32.96 0.00 0.00 15137.86 770.79 4087539.90 00:16:41.185 [2024-11-26T20:37:41.540Z] =================================================================================================================== 00:16:41.185 [2024-11-26T20:37:41.540Z] Total : 8437.97 32.96 0.00 0.00 15137.86 770.79 4087539.90 00:16:41.185 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.443 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.443 rmmod nvme_tcp 00:16:41.443 rmmod nvme_fabrics 00:16:41.443 rmmod nvme_keyring 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76585 ']' 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76585 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76585 ']' 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76585 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76585 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.702 killing process with pid 76585 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76585' 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76585 00:16:41.702 20:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76585 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 
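At this point the multipath_status test is tearing itself down: the NVMe-oF subsystem is deleted over SPDK's JSON-RPC interface, the kernel initiator modules are unloaded, the target process is killed, and the iptr helper (whose expansion follows) strips only the iptables rules tagged SPDK_NVMF. Condensed from the trace, the teardown amounts to roughly:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp      # the rmmod output above shows nvme_fabrics/nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill 76585                   # killprocess() first verifies the pid is alive with kill -0
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep everything except SPDK-tagged rules

The pipeline form of iptr shown here is a condensed reading of the three commands traced below, not the verbatim helper from nvmf/common.sh.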
00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:41.961 00:16:41.961 real 0m41.387s 00:16:41.961 user 2m14.797s 00:16:41.961 sys 0m12.008s 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.961 20:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:41.961 ************************************ 00:16:41.961 END TEST nvmf_host_multipath_status 00:16:41.961 ************************************ 00:16:42.220 20:37:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:42.220 20:37:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:42.220 20:37:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.220 20:37:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.220 ************************************ 00:16:42.220 START TEST nvmf_discovery_remove_ifc 00:16:42.220 ************************************ 00:16:42.220 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:42.220 * Looking for test storage... 00:16:42.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.221 --rc genhtml_branch_coverage=1 00:16:42.221 --rc genhtml_function_coverage=1 00:16:42.221 --rc genhtml_legend=1 00:16:42.221 --rc geninfo_all_blocks=1 00:16:42.221 --rc geninfo_unexecuted_blocks=1 00:16:42.221 00:16:42.221 ' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.221 --rc genhtml_branch_coverage=1 00:16:42.221 --rc genhtml_function_coverage=1 00:16:42.221 --rc genhtml_legend=1 00:16:42.221 --rc geninfo_all_blocks=1 00:16:42.221 --rc geninfo_unexecuted_blocks=1 00:16:42.221 00:16:42.221 ' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.221 --rc genhtml_branch_coverage=1 00:16:42.221 --rc genhtml_function_coverage=1 00:16:42.221 --rc genhtml_legend=1 00:16:42.221 --rc geninfo_all_blocks=1 00:16:42.221 --rc geninfo_unexecuted_blocks=1 00:16:42.221 00:16:42.221 ' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.221 --rc genhtml_branch_coverage=1 00:16:42.221 --rc genhtml_function_coverage=1 00:16:42.221 --rc genhtml_legend=1 00:16:42.221 --rc geninfo_all_blocks=1 00:16:42.221 --rc geninfo_unexecuted_blocks=1 00:16:42.221 00:16:42.221 ' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.221 20:37:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:42.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:42.221 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.222 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.481 20:37:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:42.481 Cannot find device "nvmf_init_br" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:42.481 Cannot find device "nvmf_init_br2" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:42.481 Cannot find device "nvmf_tgt_br" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.481 Cannot find device "nvmf_tgt_br2" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:42.481 Cannot find device "nvmf_init_br" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:42.481 Cannot find device "nvmf_init_br2" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:42.481 Cannot find device "nvmf_tgt_br" 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:42.481 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:42.481 Cannot find device "nvmf_tgt_br2" 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:42.482 Cannot find device "nvmf_br" 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:42.482 Cannot find device "nvmf_init_if" 00:16:42.482 20:37:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:42.482 Cannot find device "nvmf_init_if2" 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:42.482 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.482 20:37:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:42.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:42.740 00:16:42.740 --- 10.0.0.3 ping statistics --- 00:16:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.740 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:42.740 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:42.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:42.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:16:42.740 00:16:42.740 --- 10.0.0.4 ping statistics --- 00:16:42.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.741 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:42.741 00:16:42.741 --- 10.0.0.1 ping statistics --- 00:16:42.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.741 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:42.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:42.741 00:16:42.741 --- 10.0.0.2 ping statistics --- 00:16:42.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.741 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77498 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77498 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77498 ']' 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
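Before the discovery_remove_ifc test proper starts, nvmftestinit rebuilds the virtual test network: the target lives in its own network namespace (nvmf_tgt_ns_spdk) and talks to the initiator side over veth pairs joined by a bridge, with iptables ACCEPT rules for port 4420 and ping checks to prove reachability. A condensed view of what the trace above sets up (the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way and omitted here; the iptables comment string is shortened):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3    # initiator -> target-namespace reachability check

The target is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and the harness waits on its RPC socket at /var/tmp/spdk.sock, which is what the SPDK startup banner that follows belongs to.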
00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.741 20:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:42.741 [2024-11-26 20:37:43.028494] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:16:42.741 [2024-11-26 20:37:43.028583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.998 [2024-11-26 20:37:43.178294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.998 [2024-11-26 20:37:43.232633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.998 [2024-11-26 20:37:43.232701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.998 [2024-11-26 20:37:43.232728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.998 [2024-11-26 20:37:43.232753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.998 [2024-11-26 20:37:43.232760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.998 [2024-11-26 20:37:43.233182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.998 [2024-11-26 20:37:43.288765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.256 [2024-11-26 20:37:43.416419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.256 [2024-11-26 20:37:43.424526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:43.256 null0 00:16:43.256 [2024-11-26 20:37:43.456440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77517 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77517 /tmp/host.sock 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77517 ']' 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.256 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.256 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.256 [2024-11-26 20:37:43.540738] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:16:43.256 [2024-11-26 20:37:43.540865] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77517 ] 00:16:43.514 [2024-11-26 20:37:43.695305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.514 [2024-11-26 20:37:43.765734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.514 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.770 [2024-11-26 20:37:43.868413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.770 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.770 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:43.770 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.770 20:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.700 [2024-11-26 20:37:44.929640] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:44.700 [2024-11-26 20:37:44.929666] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:44.700 [2024-11-26 20:37:44.929690] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:44.700 [2024-11-26 20:37:44.935707] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:44.700 [2024-11-26 20:37:44.990073] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:44.700 [2024-11-26 20:37:44.991069] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22b5000:1 started. 00:16:44.700 [2024-11-26 20:37:44.992993] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:44.700 [2024-11-26 20:37:44.993053] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:44.700 [2024-11-26 20:37:44.993081] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:44.700 [2024-11-26 20:37:44.993099] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:44.700 [2024-11-26 20:37:44.993124] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.700 [2024-11-26 20:37:44.998301] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22b5000 was disconnected and freed. delete nvme_qpair. 
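The discovery path is now attached and nvme0n1 exists on the host side; the get_bdev_list/wait_for_bdev traces surrounding this point are the test polling the host app's RPC socket until the bdev list matches what it expects (first "nvme0n1", later "" once the target interface has been removed). A condensed reconstruction of those helpers, not the verbatim code from test/nvmf/host/discovery_remove_ifc.sh, which may differ in details such as an overall timeout:

    # get_bdev_list: names of all bdevs known to the host app, sorted and space-joined
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # wait_for_bdev: poll once a second until the list equals the expected value
    wait_for_bdev() {
        local expected=$1    # "nvme0n1" here; "" after the target interface is taken down
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }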
00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:44.700 20:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:44.700 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.700 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:44.700 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:44.957 20:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:45.889 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:45.889 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.889 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:45.889 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.889 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.889 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:45.890 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:45.890 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.890 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:45.890 20:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:46.851 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.109 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.109 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:47.109 20:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:48.041 20:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.975 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:49.231 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:49.231 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:49.231 20:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:50.165 20:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:50.165 [2024-11-26 20:37:50.420701] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:50.165 [2024-11-26 20:37:50.420959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.165 [2024-11-26 20:37:50.420979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.165 [2024-11-26 20:37:50.420993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.165 [2024-11-26 20:37:50.421002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.165 [2024-11-26 20:37:50.421013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.165 [2024-11-26 20:37:50.421022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.165 [2024-11-26 20:37:50.421042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.165 [2024-11-26 20:37:50.421051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.165 [2024-11-26 20:37:50.421061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.165 [2024-11-26 20:37:50.421072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.165 [2024-11-26 20:37:50.421082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2291250 is same with the state(6) to be set 
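The repeated get_bdev_list / sleep 1 cycles above are the test's wait loop: it keeps flattening the output of bdev_get_bdevs on the host RPC socket and compares it to the expected value, an empty string at this point, i.e. it waits for nvme0n1 to disappear once the target interface goes away. A minimal sketch of that pattern, reconstructed from the @29/@33/@34 xtrace lines rather than copied from discovery_remove_ifc.sh:

    # Sketch of the polling helpers traced above (reconstructed, may differ
    # from the in-tree script).
    get_bdev_list() {
        # Query the host app over its private RPC socket and flatten the JSON
        # array of bdev names into one sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the whole list equals the argument; calling
        # it with an empty string therefore waits for the last bdev to vanish.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }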
00:16:50.165 [2024-11-26 20:37:50.430698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2291250 (9): Bad file descriptor 00:16:50.165 [2024-11-26 20:37:50.440721] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:50.165 [2024-11-26 20:37:50.440935] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:50.165 [2024-11-26 20:37:50.441118] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:50.165 [2024-11-26 20:37:50.441251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:50.165 [2024-11-26 20:37:50.441434] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.098 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.356 [2024-11-26 20:37:51.494361] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:51.356 [2024-11-26 20:37:51.494476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2291250 with addr=10.0.0.3, port=4420 00:16:51.356 [2024-11-26 20:37:51.494513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2291250 is same with the state(6) to be set 00:16:51.356 [2024-11-26 20:37:51.494631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2291250 (9): Bad file descriptor 00:16:51.356 [2024-11-26 20:37:51.495184] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:51.356 [2024-11-26 20:37:51.495266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:51.356 [2024-11-26 20:37:51.495285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:51.356 [2024-11-26 20:37:51.495299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:51.356 [2024-11-26 20:37:51.495313] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:51.356 [2024-11-26 20:37:51.495322] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:51.356 [2024-11-26 20:37:51.495339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:51.356 [2024-11-26 20:37:51.495359] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:51.356 [2024-11-26 20:37:51.495367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:51.356 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.356 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.356 20:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.340 [2024-11-26 20:37:52.495414] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:52.340 [2024-11-26 20:37:52.495692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:52.340 [2024-11-26 20:37:52.495733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:52.340 [2024-11-26 20:37:52.495745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:52.340 [2024-11-26 20:37:52.495758] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:52.340 [2024-11-26 20:37:52.495769] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:52.340 [2024-11-26 20:37:52.495777] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:52.340 [2024-11-26 20:37:52.495782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
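The errno 110 in the nvme_tcp and uring messages above is ETIMEDOUT, and the (9) in the flush messages is EBADF, the socket having already been closed. Both are the expected fallout of the target-side address having been removed earlier in the script, outside this excerpt; a hedged sketch of that trigger, whose matching re-add appears a few lines below at discovery_remove_ifc.sh@82-83:

    # Assumed trigger, not shown in this excerpt: strip the listener address
    # and take the interface down inside the target's namespace, so the host's
    # qpair to 10.0.0.3:4420 times out and every reconnect attempt fails.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # bdev_nvme then cycles through disconnect -> reconnect ->
    # "Resetting controller failed." as traced above.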
00:16:52.340 [2024-11-26 20:37:52.495818] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:52.340 [2024-11-26 20:37:52.495867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.340 [2024-11-26 20:37:52.495882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.340 [2024-11-26 20:37:52.495897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.340 [2024-11-26 20:37:52.495907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.340 [2024-11-26 20:37:52.495918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.340 [2024-11-26 20:37:52.495927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.340 [2024-11-26 20:37:52.495937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.340 [2024-11-26 20:37:52.495946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.340 [2024-11-26 20:37:52.495956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.341 [2024-11-26 20:37:52.495966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.341 [2024-11-26 20:37:52.495976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
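The Remove discovery entry and discovery failed-state messages above come from the host's bdev_nvme discovery service: it polls the discovery subsystem at 10.0.0.3:8009 and drops nqn.2016-06.io.spdk:cnode0 once its 4420 port stops answering. That service would have been started earlier in the script with the bdev_nvme_start_discovery RPC; a hedged sketch of such an invocation (flag names as provided by SPDK's rpc.py, values inferred from this log, the real call is outside this excerpt):

    # Assumed earlier setup, not shown in this excerpt: start the discovery
    # service that produces the Discovery[10.0.0.3:8009] messages, using
    # "nvme" as the controller base name (hence the nvme0n1 / nvme1n1 bdevs).
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009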
00:16:52.341 [2024-11-26 20:37:52.496022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221ca20 (9): Bad file descriptor 00:16:52.341 [2024-11-26 20:37:52.497007] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:52.341 [2024-11-26 20:37:52.497030] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:52.341 20:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.715 20:37:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:53.715 20:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.282 [2024-11-26 20:37:54.504444] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:54.282 [2024-11-26 20:37:54.504477] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:54.282 [2024-11-26 20:37:54.504497] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:54.282 [2024-11-26 20:37:54.510519] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:54.282 [2024-11-26 20:37:54.564945] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:54.282 [2024-11-26 20:37:54.566134] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x229cd80:1 started. 00:16:54.282 [2024-11-26 20:37:54.567674] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:54.282 [2024-11-26 20:37:54.567847] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:54.282 [2024-11-26 20:37:54.567913] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:54.282 [2024-11-26 20:37:54.568004] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:54.282 [2024-11-26 20:37:54.568126] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:54.282 [2024-11-26 20:37:54.573226] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x229cd80 was disconnected and freed. delete nvme_qpair. 
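Condensed, the recovery step traced across the last few lines is: put the address back on the target interface, bring it up, and wait for the discovery poller to re-attach the subsystem as a new controller (nvme1) whose namespace bdev then satisfies the wait. The commands and helper are the same ones visible in the @82, @83 and @86 xtrace lines:

    # Same steps as the discovery_remove_ifc.sh@82-86 trace above.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # The discovery poller reconnects to 10.0.0.3:8009, reads the log page,
    # attaches subsystem nvme1, and the poll loop exits once the bdev list
    # reads exactly "nvme1n1".
    wait_for_bdev nvme1n1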
00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77517 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77517 ']' 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77517 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77517 00:16:54.541 killing process with pid 77517 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77517' 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77517 00:16:54.541 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77517 00:16:54.801 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:54.801 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:54.801 20:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.801 rmmod nvme_tcp 00:16:54.801 rmmod nvme_fabrics 00:16:54.801 rmmod nvme_keyring 00:16:54.801 20:37:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77498 ']' 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77498 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77498 ']' 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77498 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77498 00:16:54.801 killing process with pid 77498 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77498' 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77498 00:16:54.801 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77498 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.060 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:55.319 00:16:55.319 real 0m13.226s 00:16:55.319 user 0m22.365s 00:16:55.319 sys 0m2.502s 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.319 ************************************ 00:16:55.319 END TEST nvmf_discovery_remove_ifc 00:16:55.319 ************************************ 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.319 ************************************ 00:16:55.319 START TEST nvmf_identify_kernel_target 00:16:55.319 ************************************ 00:16:55.319 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:55.578 * Looking for test storage... 
00:16:55.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:55.578 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.579 --rc genhtml_branch_coverage=1 00:16:55.579 --rc genhtml_function_coverage=1 00:16:55.579 --rc genhtml_legend=1 00:16:55.579 --rc geninfo_all_blocks=1 00:16:55.579 --rc geninfo_unexecuted_blocks=1 00:16:55.579 00:16:55.579 ' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.579 --rc genhtml_branch_coverage=1 00:16:55.579 --rc genhtml_function_coverage=1 00:16:55.579 --rc genhtml_legend=1 00:16:55.579 --rc geninfo_all_blocks=1 00:16:55.579 --rc geninfo_unexecuted_blocks=1 00:16:55.579 00:16:55.579 ' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.579 --rc genhtml_branch_coverage=1 00:16:55.579 --rc genhtml_function_coverage=1 00:16:55.579 --rc genhtml_legend=1 00:16:55.579 --rc geninfo_all_blocks=1 00:16:55.579 --rc geninfo_unexecuted_blocks=1 00:16:55.579 00:16:55.579 ' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.579 --rc genhtml_branch_coverage=1 00:16:55.579 --rc genhtml_function_coverage=1 00:16:55.579 --rc genhtml_legend=1 00:16:55.579 --rc geninfo_all_blocks=1 00:16:55.579 --rc geninfo_unexecuted_blocks=1 00:16:55.579 00:16:55.579 ' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
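The scripts/common.sh trace above (lt 1.15 2, then cmp_versions, then return 0) is a plain component-wise version compare: both versions are split on '.', '-' and ':', each field is compared numerically, and the first difference decides the result. A compact sketch of that logic, reconstructed from the xtrace and simplified (the in-tree helper also sanitizes fields and supports the non-strict operators):

    # Reconstructed from the cmp_versions xtrace above; assumes purely
    # numeric fields and only the '<' and '>' operators.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"    # split "1.15" into (1 15)
        IFS=.-: read -ra ver2 <<< "$3"    # split "2"    into (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        return 1                          # all fields equal: neither strictly < nor >
    }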
00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:55.579 20:37:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:55.579 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.580 20:37:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:55.580 Cannot find device "nvmf_init_br" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:55.580 Cannot find device "nvmf_init_br2" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:55.580 Cannot find device "nvmf_tgt_br" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.580 Cannot find device "nvmf_tgt_br2" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:55.580 Cannot find device "nvmf_init_br" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:55.580 Cannot find device "nvmf_init_br2" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:55.580 Cannot find device "nvmf_tgt_br" 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:55.580 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:55.838 Cannot find device "nvmf_tgt_br2" 00:16:55.838 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:55.838 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:55.838 Cannot find device "nvmf_br" 00:16:55.838 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:55.838 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:55.838 Cannot find device "nvmf_init_if" 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:55.839 Cannot find device "nvmf_init_if2" 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.839 20:37:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:55.839 20:37:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:55.839 20:37:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:55.839 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:56.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:56.098 00:16:56.098 --- 10.0.0.3 ping statistics --- 00:16:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.098 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:56.098 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:56.098 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:56.098 00:16:56.098 --- 10.0.0.4 ping statistics --- 00:16:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.098 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:56.098 00:16:56.098 --- 10.0.0.1 ping statistics --- 00:16:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.098 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:56.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:56.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:56.098 00:16:56.098 --- 10.0.0.2 ping statistics --- 00:16:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.098 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:56.098 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:56.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:56.356 Waiting for block devices as requested 00:16:56.614 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:56.614 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:56.614 No valid GPT data, bailing 00:16:56.614 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:56.873 20:37:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:56.873 20:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:56.873 No valid GPT data, bailing 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:56.873 No valid GPT data, bailing 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:56.873 No valid GPT data, bailing 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:56.873 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -a 10.0.0.1 -t tcp -s 4420 00:16:57.132 00:16:57.132 Discovery Log Number of Records 2, Generation counter 2 00:16:57.132 =====Discovery Log Entry 0====== 00:16:57.132 trtype: tcp 00:16:57.132 adrfam: ipv4 00:16:57.132 subtype: current discovery subsystem 00:16:57.132 treq: not specified, sq flow control disable supported 00:16:57.132 portid: 1 00:16:57.133 trsvcid: 4420 00:16:57.133 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:57.133 traddr: 10.0.0.1 00:16:57.133 eflags: none 00:16:57.133 sectype: none 00:16:57.133 =====Discovery Log Entry 1====== 00:16:57.133 trtype: tcp 00:16:57.133 adrfam: ipv4 00:16:57.133 subtype: nvme subsystem 00:16:57.133 treq: not 
specified, sq flow control disable supported 00:16:57.133 portid: 1 00:16:57.133 trsvcid: 4420 00:16:57.133 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:57.133 traddr: 10.0.0.1 00:16:57.133 eflags: none 00:16:57.133 sectype: none 00:16:57.133 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:57.133 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:57.133 ===================================================== 00:16:57.133 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:57.133 ===================================================== 00:16:57.133 Controller Capabilities/Features 00:16:57.133 ================================ 00:16:57.133 Vendor ID: 0000 00:16:57.133 Subsystem Vendor ID: 0000 00:16:57.133 Serial Number: 560e7e9e494ece4243f3 00:16:57.133 Model Number: Linux 00:16:57.133 Firmware Version: 6.8.9-20 00:16:57.133 Recommended Arb Burst: 0 00:16:57.133 IEEE OUI Identifier: 00 00 00 00:16:57.133 Multi-path I/O 00:16:57.133 May have multiple subsystem ports: No 00:16:57.133 May have multiple controllers: No 00:16:57.133 Associated with SR-IOV VF: No 00:16:57.133 Max Data Transfer Size: Unlimited 00:16:57.133 Max Number of Namespaces: 0 00:16:57.133 Max Number of I/O Queues: 1024 00:16:57.133 NVMe Specification Version (VS): 1.3 00:16:57.133 NVMe Specification Version (Identify): 1.3 00:16:57.133 Maximum Queue Entries: 1024 00:16:57.133 Contiguous Queues Required: No 00:16:57.133 Arbitration Mechanisms Supported 00:16:57.133 Weighted Round Robin: Not Supported 00:16:57.133 Vendor Specific: Not Supported 00:16:57.133 Reset Timeout: 7500 ms 00:16:57.133 Doorbell Stride: 4 bytes 00:16:57.133 NVM Subsystem Reset: Not Supported 00:16:57.133 Command Sets Supported 00:16:57.133 NVM Command Set: Supported 00:16:57.133 Boot Partition: Not Supported 00:16:57.133 Memory Page Size Minimum: 4096 bytes 00:16:57.133 Memory Page Size Maximum: 4096 bytes 00:16:57.133 Persistent Memory Region: Not Supported 00:16:57.133 Optional Asynchronous Events Supported 00:16:57.133 Namespace Attribute Notices: Not Supported 00:16:57.133 Firmware Activation Notices: Not Supported 00:16:57.133 ANA Change Notices: Not Supported 00:16:57.133 PLE Aggregate Log Change Notices: Not Supported 00:16:57.133 LBA Status Info Alert Notices: Not Supported 00:16:57.133 EGE Aggregate Log Change Notices: Not Supported 00:16:57.133 Normal NVM Subsystem Shutdown event: Not Supported 00:16:57.133 Zone Descriptor Change Notices: Not Supported 00:16:57.133 Discovery Log Change Notices: Supported 00:16:57.133 Controller Attributes 00:16:57.133 128-bit Host Identifier: Not Supported 00:16:57.133 Non-Operational Permissive Mode: Not Supported 00:16:57.133 NVM Sets: Not Supported 00:16:57.133 Read Recovery Levels: Not Supported 00:16:57.133 Endurance Groups: Not Supported 00:16:57.133 Predictable Latency Mode: Not Supported 00:16:57.133 Traffic Based Keep ALive: Not Supported 00:16:57.133 Namespace Granularity: Not Supported 00:16:57.133 SQ Associations: Not Supported 00:16:57.133 UUID List: Not Supported 00:16:57.133 Multi-Domain Subsystem: Not Supported 00:16:57.133 Fixed Capacity Management: Not Supported 00:16:57.133 Variable Capacity Management: Not Supported 00:16:57.133 Delete Endurance Group: Not Supported 00:16:57.133 Delete NVM Set: Not Supported 00:16:57.133 Extended LBA Formats Supported: Not Supported 00:16:57.133 Flexible Data 
Placement Supported: Not Supported 00:16:57.133 00:16:57.133 Controller Memory Buffer Support 00:16:57.133 ================================ 00:16:57.133 Supported: No 00:16:57.133 00:16:57.133 Persistent Memory Region Support 00:16:57.133 ================================ 00:16:57.133 Supported: No 00:16:57.133 00:16:57.133 Admin Command Set Attributes 00:16:57.133 ============================ 00:16:57.133 Security Send/Receive: Not Supported 00:16:57.133 Format NVM: Not Supported 00:16:57.133 Firmware Activate/Download: Not Supported 00:16:57.133 Namespace Management: Not Supported 00:16:57.133 Device Self-Test: Not Supported 00:16:57.133 Directives: Not Supported 00:16:57.133 NVMe-MI: Not Supported 00:16:57.133 Virtualization Management: Not Supported 00:16:57.133 Doorbell Buffer Config: Not Supported 00:16:57.133 Get LBA Status Capability: Not Supported 00:16:57.133 Command & Feature Lockdown Capability: Not Supported 00:16:57.133 Abort Command Limit: 1 00:16:57.133 Async Event Request Limit: 1 00:16:57.133 Number of Firmware Slots: N/A 00:16:57.133 Firmware Slot 1 Read-Only: N/A 00:16:57.133 Firmware Activation Without Reset: N/A 00:16:57.133 Multiple Update Detection Support: N/A 00:16:57.133 Firmware Update Granularity: No Information Provided 00:16:57.133 Per-Namespace SMART Log: No 00:16:57.133 Asymmetric Namespace Access Log Page: Not Supported 00:16:57.133 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:57.133 Command Effects Log Page: Not Supported 00:16:57.133 Get Log Page Extended Data: Supported 00:16:57.133 Telemetry Log Pages: Not Supported 00:16:57.133 Persistent Event Log Pages: Not Supported 00:16:57.133 Supported Log Pages Log Page: May Support 00:16:57.133 Commands Supported & Effects Log Page: Not Supported 00:16:57.133 Feature Identifiers & Effects Log Page:May Support 00:16:57.133 NVMe-MI Commands & Effects Log Page: May Support 00:16:57.133 Data Area 4 for Telemetry Log: Not Supported 00:16:57.133 Error Log Page Entries Supported: 1 00:16:57.133 Keep Alive: Not Supported 00:16:57.133 00:16:57.133 NVM Command Set Attributes 00:16:57.133 ========================== 00:16:57.133 Submission Queue Entry Size 00:16:57.133 Max: 1 00:16:57.133 Min: 1 00:16:57.133 Completion Queue Entry Size 00:16:57.133 Max: 1 00:16:57.133 Min: 1 00:16:57.133 Number of Namespaces: 0 00:16:57.133 Compare Command: Not Supported 00:16:57.133 Write Uncorrectable Command: Not Supported 00:16:57.133 Dataset Management Command: Not Supported 00:16:57.133 Write Zeroes Command: Not Supported 00:16:57.133 Set Features Save Field: Not Supported 00:16:57.133 Reservations: Not Supported 00:16:57.133 Timestamp: Not Supported 00:16:57.133 Copy: Not Supported 00:16:57.133 Volatile Write Cache: Not Present 00:16:57.133 Atomic Write Unit (Normal): 1 00:16:57.133 Atomic Write Unit (PFail): 1 00:16:57.133 Atomic Compare & Write Unit: 1 00:16:57.133 Fused Compare & Write: Not Supported 00:16:57.133 Scatter-Gather List 00:16:57.133 SGL Command Set: Supported 00:16:57.133 SGL Keyed: Not Supported 00:16:57.133 SGL Bit Bucket Descriptor: Not Supported 00:16:57.133 SGL Metadata Pointer: Not Supported 00:16:57.133 Oversized SGL: Not Supported 00:16:57.133 SGL Metadata Address: Not Supported 00:16:57.133 SGL Offset: Supported 00:16:57.133 Transport SGL Data Block: Not Supported 00:16:57.133 Replay Protected Memory Block: Not Supported 00:16:57.133 00:16:57.133 Firmware Slot Information 00:16:57.133 ========================= 00:16:57.133 Active slot: 0 00:16:57.133 00:16:57.133 00:16:57.133 Error Log 
00:16:57.133 ========= 00:16:57.133 00:16:57.133 Active Namespaces 00:16:57.133 ================= 00:16:57.133 Discovery Log Page 00:16:57.133 ================== 00:16:57.133 Generation Counter: 2 00:16:57.133 Number of Records: 2 00:16:57.133 Record Format: 0 00:16:57.133 00:16:57.133 Discovery Log Entry 0 00:16:57.133 ---------------------- 00:16:57.133 Transport Type: 3 (TCP) 00:16:57.133 Address Family: 1 (IPv4) 00:16:57.133 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:57.133 Entry Flags: 00:16:57.133 Duplicate Returned Information: 0 00:16:57.133 Explicit Persistent Connection Support for Discovery: 0 00:16:57.133 Transport Requirements: 00:16:57.133 Secure Channel: Not Specified 00:16:57.133 Port ID: 1 (0x0001) 00:16:57.133 Controller ID: 65535 (0xffff) 00:16:57.133 Admin Max SQ Size: 32 00:16:57.133 Transport Service Identifier: 4420 00:16:57.133 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:57.133 Transport Address: 10.0.0.1 00:16:57.133 Discovery Log Entry 1 00:16:57.133 ---------------------- 00:16:57.133 Transport Type: 3 (TCP) 00:16:57.133 Address Family: 1 (IPv4) 00:16:57.133 Subsystem Type: 2 (NVM Subsystem) 00:16:57.133 Entry Flags: 00:16:57.133 Duplicate Returned Information: 0 00:16:57.133 Explicit Persistent Connection Support for Discovery: 0 00:16:57.133 Transport Requirements: 00:16:57.133 Secure Channel: Not Specified 00:16:57.134 Port ID: 1 (0x0001) 00:16:57.134 Controller ID: 65535 (0xffff) 00:16:57.134 Admin Max SQ Size: 32 00:16:57.134 Transport Service Identifier: 4420 00:16:57.134 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:57.134 Transport Address: 10.0.0.1 00:16:57.134 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:57.393 get_feature(0x01) failed 00:16:57.393 get_feature(0x02) failed 00:16:57.393 get_feature(0x04) failed 00:16:57.393 ===================================================== 00:16:57.393 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:57.393 ===================================================== 00:16:57.393 Controller Capabilities/Features 00:16:57.393 ================================ 00:16:57.393 Vendor ID: 0000 00:16:57.393 Subsystem Vendor ID: 0000 00:16:57.393 Serial Number: 50521a40fe648fd52fcb 00:16:57.393 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:57.393 Firmware Version: 6.8.9-20 00:16:57.393 Recommended Arb Burst: 6 00:16:57.393 IEEE OUI Identifier: 00 00 00 00:16:57.393 Multi-path I/O 00:16:57.393 May have multiple subsystem ports: Yes 00:16:57.393 May have multiple controllers: Yes 00:16:57.393 Associated with SR-IOV VF: No 00:16:57.393 Max Data Transfer Size: Unlimited 00:16:57.393 Max Number of Namespaces: 1024 00:16:57.393 Max Number of I/O Queues: 128 00:16:57.393 NVMe Specification Version (VS): 1.3 00:16:57.393 NVMe Specification Version (Identify): 1.3 00:16:57.393 Maximum Queue Entries: 1024 00:16:57.393 Contiguous Queues Required: No 00:16:57.393 Arbitration Mechanisms Supported 00:16:57.393 Weighted Round Robin: Not Supported 00:16:57.393 Vendor Specific: Not Supported 00:16:57.393 Reset Timeout: 7500 ms 00:16:57.393 Doorbell Stride: 4 bytes 00:16:57.393 NVM Subsystem Reset: Not Supported 00:16:57.393 Command Sets Supported 00:16:57.393 NVM Command Set: Supported 00:16:57.393 Boot Partition: Not Supported 00:16:57.393 Memory 
Page Size Minimum: 4096 bytes 00:16:57.394 Memory Page Size Maximum: 4096 bytes 00:16:57.394 Persistent Memory Region: Not Supported 00:16:57.394 Optional Asynchronous Events Supported 00:16:57.394 Namespace Attribute Notices: Supported 00:16:57.394 Firmware Activation Notices: Not Supported 00:16:57.394 ANA Change Notices: Supported 00:16:57.394 PLE Aggregate Log Change Notices: Not Supported 00:16:57.394 LBA Status Info Alert Notices: Not Supported 00:16:57.394 EGE Aggregate Log Change Notices: Not Supported 00:16:57.394 Normal NVM Subsystem Shutdown event: Not Supported 00:16:57.394 Zone Descriptor Change Notices: Not Supported 00:16:57.394 Discovery Log Change Notices: Not Supported 00:16:57.394 Controller Attributes 00:16:57.394 128-bit Host Identifier: Supported 00:16:57.394 Non-Operational Permissive Mode: Not Supported 00:16:57.394 NVM Sets: Not Supported 00:16:57.394 Read Recovery Levels: Not Supported 00:16:57.394 Endurance Groups: Not Supported 00:16:57.394 Predictable Latency Mode: Not Supported 00:16:57.394 Traffic Based Keep ALive: Supported 00:16:57.394 Namespace Granularity: Not Supported 00:16:57.394 SQ Associations: Not Supported 00:16:57.394 UUID List: Not Supported 00:16:57.394 Multi-Domain Subsystem: Not Supported 00:16:57.394 Fixed Capacity Management: Not Supported 00:16:57.394 Variable Capacity Management: Not Supported 00:16:57.394 Delete Endurance Group: Not Supported 00:16:57.394 Delete NVM Set: Not Supported 00:16:57.394 Extended LBA Formats Supported: Not Supported 00:16:57.394 Flexible Data Placement Supported: Not Supported 00:16:57.394 00:16:57.394 Controller Memory Buffer Support 00:16:57.394 ================================ 00:16:57.394 Supported: No 00:16:57.394 00:16:57.394 Persistent Memory Region Support 00:16:57.394 ================================ 00:16:57.394 Supported: No 00:16:57.394 00:16:57.394 Admin Command Set Attributes 00:16:57.394 ============================ 00:16:57.394 Security Send/Receive: Not Supported 00:16:57.394 Format NVM: Not Supported 00:16:57.394 Firmware Activate/Download: Not Supported 00:16:57.394 Namespace Management: Not Supported 00:16:57.394 Device Self-Test: Not Supported 00:16:57.394 Directives: Not Supported 00:16:57.394 NVMe-MI: Not Supported 00:16:57.394 Virtualization Management: Not Supported 00:16:57.394 Doorbell Buffer Config: Not Supported 00:16:57.394 Get LBA Status Capability: Not Supported 00:16:57.394 Command & Feature Lockdown Capability: Not Supported 00:16:57.394 Abort Command Limit: 4 00:16:57.394 Async Event Request Limit: 4 00:16:57.394 Number of Firmware Slots: N/A 00:16:57.394 Firmware Slot 1 Read-Only: N/A 00:16:57.394 Firmware Activation Without Reset: N/A 00:16:57.394 Multiple Update Detection Support: N/A 00:16:57.394 Firmware Update Granularity: No Information Provided 00:16:57.394 Per-Namespace SMART Log: Yes 00:16:57.394 Asymmetric Namespace Access Log Page: Supported 00:16:57.394 ANA Transition Time : 10 sec 00:16:57.394 00:16:57.394 Asymmetric Namespace Access Capabilities 00:16:57.394 ANA Optimized State : Supported 00:16:57.394 ANA Non-Optimized State : Supported 00:16:57.394 ANA Inaccessible State : Supported 00:16:57.394 ANA Persistent Loss State : Supported 00:16:57.394 ANA Change State : Supported 00:16:57.394 ANAGRPID is not changed : No 00:16:57.394 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:57.394 00:16:57.394 ANA Group Identifier Maximum : 128 00:16:57.394 Number of ANA Group Identifiers : 128 00:16:57.394 Max Number of Allowed Namespaces : 1024 00:16:57.394 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:57.394 Command Effects Log Page: Supported 00:16:57.394 Get Log Page Extended Data: Supported 00:16:57.394 Telemetry Log Pages: Not Supported 00:16:57.394 Persistent Event Log Pages: Not Supported 00:16:57.394 Supported Log Pages Log Page: May Support 00:16:57.394 Commands Supported & Effects Log Page: Not Supported 00:16:57.394 Feature Identifiers & Effects Log Page:May Support 00:16:57.394 NVMe-MI Commands & Effects Log Page: May Support 00:16:57.394 Data Area 4 for Telemetry Log: Not Supported 00:16:57.394 Error Log Page Entries Supported: 128 00:16:57.394 Keep Alive: Supported 00:16:57.394 Keep Alive Granularity: 1000 ms 00:16:57.394 00:16:57.394 NVM Command Set Attributes 00:16:57.394 ========================== 00:16:57.394 Submission Queue Entry Size 00:16:57.394 Max: 64 00:16:57.394 Min: 64 00:16:57.394 Completion Queue Entry Size 00:16:57.394 Max: 16 00:16:57.394 Min: 16 00:16:57.394 Number of Namespaces: 1024 00:16:57.394 Compare Command: Not Supported 00:16:57.394 Write Uncorrectable Command: Not Supported 00:16:57.394 Dataset Management Command: Supported 00:16:57.394 Write Zeroes Command: Supported 00:16:57.394 Set Features Save Field: Not Supported 00:16:57.394 Reservations: Not Supported 00:16:57.394 Timestamp: Not Supported 00:16:57.394 Copy: Not Supported 00:16:57.394 Volatile Write Cache: Present 00:16:57.394 Atomic Write Unit (Normal): 1 00:16:57.394 Atomic Write Unit (PFail): 1 00:16:57.394 Atomic Compare & Write Unit: 1 00:16:57.394 Fused Compare & Write: Not Supported 00:16:57.394 Scatter-Gather List 00:16:57.394 SGL Command Set: Supported 00:16:57.394 SGL Keyed: Not Supported 00:16:57.394 SGL Bit Bucket Descriptor: Not Supported 00:16:57.394 SGL Metadata Pointer: Not Supported 00:16:57.394 Oversized SGL: Not Supported 00:16:57.394 SGL Metadata Address: Not Supported 00:16:57.394 SGL Offset: Supported 00:16:57.394 Transport SGL Data Block: Not Supported 00:16:57.394 Replay Protected Memory Block: Not Supported 00:16:57.394 00:16:57.394 Firmware Slot Information 00:16:57.394 ========================= 00:16:57.394 Active slot: 0 00:16:57.394 00:16:57.394 Asymmetric Namespace Access 00:16:57.394 =========================== 00:16:57.394 Change Count : 0 00:16:57.394 Number of ANA Group Descriptors : 1 00:16:57.394 ANA Group Descriptor : 0 00:16:57.394 ANA Group ID : 1 00:16:57.394 Number of NSID Values : 1 00:16:57.394 Change Count : 0 00:16:57.394 ANA State : 1 00:16:57.394 Namespace Identifier : 1 00:16:57.394 00:16:57.394 Commands Supported and Effects 00:16:57.394 ============================== 00:16:57.394 Admin Commands 00:16:57.394 -------------- 00:16:57.394 Get Log Page (02h): Supported 00:16:57.394 Identify (06h): Supported 00:16:57.394 Abort (08h): Supported 00:16:57.394 Set Features (09h): Supported 00:16:57.394 Get Features (0Ah): Supported 00:16:57.394 Asynchronous Event Request (0Ch): Supported 00:16:57.394 Keep Alive (18h): Supported 00:16:57.394 I/O Commands 00:16:57.394 ------------ 00:16:57.394 Flush (00h): Supported 00:16:57.394 Write (01h): Supported LBA-Change 00:16:57.394 Read (02h): Supported 00:16:57.394 Write Zeroes (08h): Supported LBA-Change 00:16:57.394 Dataset Management (09h): Supported 00:16:57.394 00:16:57.394 Error Log 00:16:57.394 ========= 00:16:57.394 Entry: 0 00:16:57.394 Error Count: 0x3 00:16:57.394 Submission Queue Id: 0x0 00:16:57.394 Command Id: 0x5 00:16:57.394 Phase Bit: 0 00:16:57.394 Status Code: 0x2 00:16:57.394 Status Code Type: 0x0 00:16:57.394 Do Not Retry: 1 00:16:57.394 Error 
Location: 0x28 00:16:57.394 LBA: 0x0 00:16:57.394 Namespace: 0x0 00:16:57.394 Vendor Log Page: 0x0 00:16:57.394 ----------- 00:16:57.394 Entry: 1 00:16:57.394 Error Count: 0x2 00:16:57.394 Submission Queue Id: 0x0 00:16:57.394 Command Id: 0x5 00:16:57.394 Phase Bit: 0 00:16:57.394 Status Code: 0x2 00:16:57.394 Status Code Type: 0x0 00:16:57.394 Do Not Retry: 1 00:16:57.394 Error Location: 0x28 00:16:57.394 LBA: 0x0 00:16:57.394 Namespace: 0x0 00:16:57.395 Vendor Log Page: 0x0 00:16:57.395 ----------- 00:16:57.395 Entry: 2 00:16:57.395 Error Count: 0x1 00:16:57.395 Submission Queue Id: 0x0 00:16:57.395 Command Id: 0x4 00:16:57.395 Phase Bit: 0 00:16:57.395 Status Code: 0x2 00:16:57.395 Status Code Type: 0x0 00:16:57.395 Do Not Retry: 1 00:16:57.395 Error Location: 0x28 00:16:57.395 LBA: 0x0 00:16:57.395 Namespace: 0x0 00:16:57.395 Vendor Log Page: 0x0 00:16:57.395 00:16:57.395 Number of Queues 00:16:57.395 ================ 00:16:57.395 Number of I/O Submission Queues: 128 00:16:57.395 Number of I/O Completion Queues: 128 00:16:57.395 00:16:57.395 ZNS Specific Controller Data 00:16:57.395 ============================ 00:16:57.395 Zone Append Size Limit: 0 00:16:57.395 00:16:57.395 00:16:57.395 Active Namespaces 00:16:57.395 ================= 00:16:57.395 get_feature(0x05) failed 00:16:57.395 Namespace ID:1 00:16:57.395 Command Set Identifier: NVM (00h) 00:16:57.395 Deallocate: Supported 00:16:57.395 Deallocated/Unwritten Error: Not Supported 00:16:57.395 Deallocated Read Value: Unknown 00:16:57.395 Deallocate in Write Zeroes: Not Supported 00:16:57.395 Deallocated Guard Field: 0xFFFF 00:16:57.395 Flush: Supported 00:16:57.395 Reservation: Not Supported 00:16:57.395 Namespace Sharing Capabilities: Multiple Controllers 00:16:57.395 Size (in LBAs): 1310720 (5GiB) 00:16:57.395 Capacity (in LBAs): 1310720 (5GiB) 00:16:57.395 Utilization (in LBAs): 1310720 (5GiB) 00:16:57.395 UUID: babac08e-ca07-47cb-a217-18c5a7fdab11 00:16:57.395 Thin Provisioning: Not Supported 00:16:57.395 Per-NS Atomic Units: Yes 00:16:57.395 Atomic Boundary Size (Normal): 0 00:16:57.395 Atomic Boundary Size (PFail): 0 00:16:57.395 Atomic Boundary Offset: 0 00:16:57.395 NGUID/EUI64 Never Reused: No 00:16:57.395 ANA group ID: 1 00:16:57.395 Namespace Write Protected: No 00:16:57.395 Number of LBA Formats: 1 00:16:57.395 Current LBA Format: LBA Format #00 00:16:57.395 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:57.395 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.395 rmmod nvme_tcp 00:16:57.395 rmmod nvme_fabrics 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:57.395 20:37:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:57.395 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:57.654 20:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:57.912 20:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:58.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.479 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:58.738 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:58.738 ************************************ 00:16:58.738 END TEST nvmf_identify_kernel_target 00:16:58.738 ************************************ 00:16:58.738 00:16:58.738 real 0m3.290s 00:16:58.738 user 0m1.132s 00:16:58.738 sys 0m1.486s 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.738 ************************************ 00:16:58.738 START TEST nvmf_auth_host 00:16:58.738 ************************************ 00:16:58.738 20:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:58.738 * Looking for test storage... 
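Condensed, the configure_kernel_target / clean_kernel_target sequence traced above reduces to the configfs operations sketched below. This is a summary, not the verbatim nvmf/common.sh code: `set -x` does not print redirection targets, so the attribute files behind the bare `echo` lines (attr_model, attr_allow_any_host, device_path, enable, addr_*) are inferred from the standard /sys/kernel/config/nvmet layout (the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" in the identify output above is consistent with that guess).

  #!/usr/bin/env bash
  # Sketch of the kernel NVMe-oF target setup/teardown traced above.
  # Attribute file names are inferred; everything else mirrors the logged commands.
  set -euo pipefail

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/$nqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1
  blkdev=/dev/nvme1n1   # the idle, non-zoned namespace the block-device scan above ended on
  traddr=10.0.0.1       # get_main_ns_ip above resolved this from NVMF_INITIATOR_IP

  configure_kernel_target() {
      [[ -e /sys/module/nvmet ]] || modprobe nvmet
      mkdir "$subsys" "$ns" "$port"

      echo "SPDK-$nqn" > "$subsys/attr_model"            # inferred redirect target
      echo 1           > "$subsys/attr_allow_any_host"   # inferred redirect target
      echo "$blkdev"   > "$ns/device_path"
      echo 1           > "$ns/enable"

      echo "$traddr"   > "$port/addr_traddr"
      echo tcp         > "$port/addr_trtype"
      echo 4420        > "$port/addr_trsvcid"
      echo ipv4        > "$port/addr_adrfam"

      # Exposing the subsystem on the port is what makes it discoverable.
      ln -s "$subsys" "$port/subsystems/"
  }

  clean_kernel_target() {
      echo 0 > "$ns/enable"                              # inferred redirect target
      rm -f "$port/subsystems/$nqn"
      rmdir "$ns" "$port" "$subsys"
      modprobe -r nvmet_tcp nvmet
  }

With the port symlink in place, the `nvme discover -a 10.0.0.1 -t tcp -s 4420` invocation above returns the two-record discovery log (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), and spdk_nvme_identify can then connect to either entry.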
00:16:58.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:58.738 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:58.738 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:58.738 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.001 --rc genhtml_branch_coverage=1 00:16:59.001 --rc genhtml_function_coverage=1 00:16:59.001 --rc genhtml_legend=1 00:16:59.001 --rc geninfo_all_blocks=1 00:16:59.001 --rc geninfo_unexecuted_blocks=1 00:16:59.001 00:16:59.001 ' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.001 --rc genhtml_branch_coverage=1 00:16:59.001 --rc genhtml_function_coverage=1 00:16:59.001 --rc genhtml_legend=1 00:16:59.001 --rc geninfo_all_blocks=1 00:16:59.001 --rc geninfo_unexecuted_blocks=1 00:16:59.001 00:16:59.001 ' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.001 --rc genhtml_branch_coverage=1 00:16:59.001 --rc genhtml_function_coverage=1 00:16:59.001 --rc genhtml_legend=1 00:16:59.001 --rc geninfo_all_blocks=1 00:16:59.001 --rc geninfo_unexecuted_blocks=1 00:16:59.001 00:16:59.001 ' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.001 --rc genhtml_branch_coverage=1 00:16:59.001 --rc genhtml_function_coverage=1 00:16:59.001 --rc genhtml_legend=1 00:16:59.001 --rc geninfo_all_blocks=1 00:16:59.001 --rc geninfo_unexecuted_blocks=1 00:16:59.001 00:16:59.001 ' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.001 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:59.002 Cannot find device "nvmf_init_br" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:59.002 Cannot find device "nvmf_init_br2" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:59.002 Cannot find device "nvmf_tgt_br" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.002 Cannot find device "nvmf_tgt_br2" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:59.002 Cannot find device "nvmf_init_br" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:59.002 Cannot find device "nvmf_init_br2" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:59.002 Cannot find device "nvmf_tgt_br" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:59.002 Cannot find device "nvmf_tgt_br2" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:59.002 Cannot find device "nvmf_br" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:59.002 Cannot find device "nvmf_init_if" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:59.002 Cannot find device "nvmf_init_if2" 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.002 20:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:59.002 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
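For reference, the network plumbing that nvmf_veth_init is performing in this stretch of the trace condenses to the sketch below. Interface names and addresses are taken directly from the logged commands; this is a minimal reproduction of the steps, not the helper itself.

# Target-side interfaces live in a private netns; initiator-side endpoints stay in the root ns.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per interface: *_if is the endpoint, *_br is the leg that joins the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target endpoints into the namespace and address everything out of 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every link up, then a single bridge stitches the four *_br legs together so the
# root-namespace initiator can reach the namespaced target addresses.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for leg in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$leg" master nvmf_br
done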
00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:59.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:59.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:59.260 00:16:59.260 --- 10.0.0.3 ping statistics --- 00:16:59.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.260 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:59.260 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:59.260 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:16:59.260 00:16:59.260 --- 10.0.0.4 ping statistics --- 00:16:59.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.260 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:59.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:59.260 00:16:59.260 --- 10.0.0.1 ping statistics --- 00:16:59.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.260 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:59.260 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:59.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:59.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:59.261 00:16:59.261 --- 10.0.0.2 ping statistics --- 00:16:59.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.261 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:59.261 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78505 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78505 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78505 ']' 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
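What nvmfappstart and waitforlisten amount to in this run is: load the nvme-tcp initiator module, start nvmf_tgt inside the target namespace with DH-HMAC-CHAP debug logging enabled, and poll until its JSON-RPC socket answers. A rough stand-alone equivalent is sketched here; the polling loop is a simplification of SPDK's waitforlisten helper, and the rpc.py path is assumed from the repo layout seen in the trace.

modprobe nvme-tcp   # the kernel initiator side needs the NVMe/TCP transport

# Start the SPDK target inside the namespace; -L nvme_auth turns on DH-HMAC-CHAP debug logging.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll the default RPC socket until the app responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done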
00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.519 20:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=09aca7165758d120e9721994e557b67a 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HG3 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 09aca7165758d120e9721994e557b67a 0 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 09aca7165758d120e9721994e557b67a 0 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=09aca7165758d120e9721994e557b67a 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HG3 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HG3 00:16:59.784 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HG3 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.044 20:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c20408d1486d13d4deaeebe996e4fe814392b652eadcfea5b25ab615dccb6f3a 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.BwN 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c20408d1486d13d4deaeebe996e4fe814392b652eadcfea5b25ab615dccb6f3a 3 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c20408d1486d13d4deaeebe996e4fe814392b652eadcfea5b25ab615dccb6f3a 3 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c20408d1486d13d4deaeebe996e4fe814392b652eadcfea5b25ab615dccb6f3a 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BwN 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BwN 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.BwN 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=624c78b863d0cee9856c6b477d45546568b06ad83e3ff24e 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.p92 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 624c78b863d0cee9856c6b477d45546568b06ad83e3ff24e 0 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 624c78b863d0cee9856c6b477d45546568b06ad83e3ff24e 0 
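The gen_dhchap_key calls traced here draw len/2 random bytes from /dev/urandom with xxd and then pass the hex string to an inline python helper (it appears in the trace only as 'python -') that wraps it into the DHHC-1 secret format shown later in the log (DHHC-1:<digest id>:<base64 blob>:). Judging from those later values, the blob is base64 over the ASCII hex key with a little-endian CRC32 appended; the python body below is that reconstruction, not the verbatim helper from nvmf/common.sh.

gen_dhchap_key() {
    local digest=$1 len=$2            # e.g. "null 32" or "sha512 64", as in the trace
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Reconstruction of the inline python step: DHHC-1:<digest id>:base64(key || crc32(key)):
    DHCHAP_KEY=$key DHCHAP_DIGEST=${digests[$digest]} python3 - > "$file" <<'EOF'
import base64, os, struct, zlib
key = os.environ["DHCHAP_KEY"].encode()
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f'DHHC-1:{int(os.environ["DHCHAP_DIGEST"]):02}:{blob}:')
EOF
    chmod 0600 "$file"
    echo "$file"
}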
00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=624c78b863d0cee9856c6b477d45546568b06ad83e3ff24e 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.p92 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.p92 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.p92 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6434b6a97f37fdccce1fa26dcec4199c27a9effe73590720 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uBx 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6434b6a97f37fdccce1fa26dcec4199c27a9effe73590720 2 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6434b6a97f37fdccce1fa26dcec4199c27a9effe73590720 2 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6434b6a97f37fdccce1fa26dcec4199c27a9effe73590720 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uBx 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uBx 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.uBx 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.044 20:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=26f1a8fa1974617891713557e8e6b0fe 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qz3 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 26f1a8fa1974617891713557e8e6b0fe 1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 26f1a8fa1974617891713557e8e6b0fe 1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=26f1a8fa1974617891713557e8e6b0fe 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qz3 00:17:00.044 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qz3 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qz3 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64a26267a8af7a3f6b40086a98dddb27 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0vQ 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64a26267a8af7a3f6b40086a98dddb27 1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64a26267a8af7a3f6b40086a98dddb27 1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=64a26267a8af7a3f6b40086a98dddb27 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0vQ 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0vQ 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0vQ 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=19e7af8d1251bf4ba78d462c456956b7a3b3659418f5c6e8 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7Z1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19e7af8d1251bf4ba78d462c456956b7a3b3659418f5c6e8 2 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19e7af8d1251bf4ba78d462c456956b7a3b3659418f5c6e8 2 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19e7af8d1251bf4ba78d462c456956b7a3b3659418f5c6e8 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7Z1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7Z1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7Z1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:00.303 20:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff31cc56b61e4ccdc686cc0a71e3bad8 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vBz 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff31cc56b61e4ccdc686cc0a71e3bad8 0 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff31cc56b61e4ccdc686cc0a71e3bad8 0 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff31cc56b61e4ccdc686cc0a71e3bad8 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vBz 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vBz 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vBz 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=041c7138c2aa6ceb9ce167852e78b4a5feffec0d411560493fd918af01d8f564 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dfM 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 041c7138c2aa6ceb9ce167852e78b4a5feffec0d411560493fd918af01d8f564 3 00:17:00.303 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 041c7138c2aa6ceb9ce167852e78b4a5feffec0d411560493fd918af01d8f564 3 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=041c7138c2aa6ceb9ce167852e78b4a5feffec0d411560493fd918af01d8f564 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dfM 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dfM 00:17:00.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dfM 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78505 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78505 ']' 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.304 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HG3 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.BwN ]] 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BwN 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.p92 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.uBx ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.uBx 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qz3 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0vQ ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0vQ 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7Z1 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vBz ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vBz 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dfM 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.891 20:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:00.891 20:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:01.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.153 Waiting for block devices as requested 00:17:01.153 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.411 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:01.978 No valid GPT data, bailing 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:01.978 No valid GPT data, bailing 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:01.978 No valid GPT data, bailing 00:17:01.978 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:02.236 No valid GPT data, bailing 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:02.236 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -a 10.0.0.1 -t tcp -s 4420 00:17:02.236 00:17:02.236 Discovery Log Number of Records 2, Generation counter 2 00:17:02.236 =====Discovery Log Entry 0====== 00:17:02.236 trtype: tcp 00:17:02.236 adrfam: ipv4 00:17:02.236 subtype: current discovery subsystem 00:17:02.236 treq: not specified, sq flow control disable supported 00:17:02.236 portid: 1 00:17:02.236 trsvcid: 4420 00:17:02.236 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:02.236 traddr: 10.0.0.1 00:17:02.236 eflags: none 00:17:02.236 sectype: none 00:17:02.236 =====Discovery Log Entry 1====== 00:17:02.236 trtype: tcp 00:17:02.236 adrfam: ipv4 00:17:02.236 subtype: nvme subsystem 00:17:02.237 treq: not specified, sq flow control disable supported 00:17:02.237 portid: 1 00:17:02.237 trsvcid: 4420 00:17:02.237 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:02.237 traddr: 10.0.0.1 00:17:02.237 eflags: none 00:17:02.237 sectype: none 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:02.237 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.495 nvme0n1 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:02.495 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 nvme0n1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.754 
20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.754 20:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.754 20:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 nvme0n1 00:17:02.754 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.754 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.754 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.754 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.754 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.754 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:03.013 20:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.013 nvme0n1 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.013 20:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.013 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.014 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.271 nvme0n1 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.271 
20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.271 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
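That completes the five key ids for sha256/ffdhe2048; the next entries restart the inner loop with ffdhe3072. Stripped of the xtrace noise, each connect_authenticate pass above reduces to the RPC sequence below. This is a sketch, not the script itself: rpc.py stands in for the rpc_cmd wrapper seen in the trace, and it assumes the named keys key0..key4 / ckey0..ckey3 were registered with the initiator earlier in the run (not shown in this excerpt).

  # One connect/verify/detach cycle, mirroring the trace above (keyid 0 shown).
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once the controller is up
  rpc.py bdev_nvme_detach_controller nvme0

Note the key id 4 case just above: ckeys[4] is empty, so the host/auth.sh@58 expansion ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} drops the flag and the controller is attached with --dhchap-key key4 alone, i.e. presumably without requesting bidirectional (controller) authentication.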
00:17:03.531 nvme0n1 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.531 20:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.789 20:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.789 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 nvme0n1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.047 20:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.047 20:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 nvme0n1 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.307 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.308 nvme0n1 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.308 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.567 nvme0n1 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:04.567 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.568 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.825 nvme0n1 00:17:04.825 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.825 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.825 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.825 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.825 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.825 20:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.825 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.392 20:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.392 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.393 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.393 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.393 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.393 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.393 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.393 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 nvme0n1 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.660 20:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 20:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 nvme0n1 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.933 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.934 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.934 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.934 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.934 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.934 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.192 nvme0n1 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.192 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.193 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.451 nvme0n1 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.451 20:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.451 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.710 nvme0n1 00:17:06.710 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.710 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.710 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.710 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.710 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.710 20:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.710 20:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.611 20:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.869 nvme0n1 00:17:08.869 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.870 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.436 nvme0n1 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.436 20:38:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.436 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.437 20:38:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.437 20:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.695 nvme0n1 00:17:09.695 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.695 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.695 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.695 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.695 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.695 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.953 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.953 
20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.211 nvme0n1 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.211 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.212 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.815 nvme0n1 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.815 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.816 20:38:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.816 20:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.383 nvme0n1 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.383 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.384 20:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.951 nvme0n1 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:11.951 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.209 
20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.209 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.776 nvme0n1 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.776 20:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.776 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.776 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.776 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.777 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.344 nvme0n1 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.344 20:38:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:13.344 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.345 20:38:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.345 20:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.282 nvme0n1 00:17:14.282 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.283 nvme0n1 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.283 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.284 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.542 nvme0n1 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:14.543 
20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.543 nvme0n1 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.543 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.802 
20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.802 20:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.802 nvme0n1 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.802 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.803 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.062 nvme0n1 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.062 nvme0n1 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.062 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.322 
20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.322 20:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.322 nvme0n1 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:15.322 20:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.322 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.581 nvme0n1 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.581 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.582 20:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.582 20:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.841 nvme0n1 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.841 
20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.841 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:16.100 nvme0n1 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.100 20:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.100 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 nvme0n1 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.359 20:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.359 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.360 20:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.360 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.619 nvme0n1 00:17:16.619 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.619 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.619 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.619 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.619 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.620 20:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 nvme0n1 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.879 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.138 nvme0n1 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.138 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.397 nvme0n1 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.397 20:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.397 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.398 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.655 nvme0n1 00:17:17.655 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.655 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.655 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.655 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.655 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.655 20:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.913 20:38:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.913 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.172 nvme0n1 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.172 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 nvme0n1 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.738 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.739 20:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.997 nvme0n1 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.997 20:38:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.997 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 nvme0n1 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.564 20:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.132 nvme0n1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.132 20:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.700 nvme0n1 00:17:20.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.700 20:38:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.958 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.959 20:38:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.525 nvme0n1 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.525 20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.525 
20:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.093 nvme0n1 00:17:22.093 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.093 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.093 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.093 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.093 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.093 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.351 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.352 20:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.920 nvme0n1 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:22.920 20:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.920 20:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.920 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.180 nvme0n1 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.180 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:23.181 20:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.181 nvme0n1 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.181 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:23.440 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.441 nvme0n1 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.441 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 nvme0n1 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.701 20:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 nvme0n1 00:17:23.701 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.701 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.701 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.701 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.701 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.961 nvme0n1 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.961 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.962 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.221 nvme0n1 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:24.221 
20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.221 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.480 nvme0n1 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.480 
20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.480 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.481 nvme0n1 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.481 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.740 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.741 20:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 nvme0n1 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.741 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.001 nvme0n1 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.001 
20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.001 20:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.001 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.261 nvme0n1 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:25.261 20:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.261 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.520 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.521 nvme0n1 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.521 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.780 20:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.780 20:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.780 nvme0n1 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.780 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.039 
20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.039 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
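The trace above repeats the same host-side sequence for every digest/dhgroup/keyid combination: restrict the initiator's DH-CHAP options, attach the controller with the key (and controller key, when one exists) for that keyid, confirm nvme0 appears, then detach before the next combination. The lines below are a condensed sketch of one such iteration, not the test script itself; the rpc.py path, the loop variables, and the assumption that keys named key<N>/ckey<N> were already registered earlier in the run are illustrative, while the RPC names, flags, NQNs, and address come straight from the trace.

#!/usr/bin/env bash
# Hedged sketch of a single nvmf_auth_host iteration as seen in the trace.
# Assumption: ./scripts/rpc.py is the SPDK RPC client and the target at
# 10.0.0.1:4420 already exposes nqn.2024-02.io.spdk:cnode0 with matching
# DH-CHAP keys; key${keyid}/ckey${keyid} are key names registered earlier.
rpc=./scripts/rpc.py        # assumed helper path, not shown in the trace
digest=sha512
dhgroup=ffdhe4096
keyid=0

# Limit the initiator to the digest/dhgroup combination under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key for this keyid (controller key included when defined).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller came up, then tear it down.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$rpc bdev_nvme_detach_controller nvme0
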
00:17:26.040 nvme0n1 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.040 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.298 20:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.298 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.558 nvme0n1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.558 20:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.558 20:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.558 20:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.125 nvme0n1 00:17:27.125 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.125 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.125 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.125 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.125 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.126 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.386 nvme0n1 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.386 20:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.953 nvme0n1 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.953 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 nvme0n1 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:28.211 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlhY2E3MTY1NzU4ZDEyMGU5NzIxOTk0ZTU1N2I2N2G9etFT: 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: ]] 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzIwNDA4ZDE0ODZkMTNkNGRlYWVlYmU5OTZlNGZlODE0MzkyYjY1MmVhZGNmZWE1YjI1YWI2MTVkY2NiNmYzYaCgi/M=: 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.212 20:38:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.212 20:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.147 nvme0n1 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.147 20:38:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.147 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 nvme0n1 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.714 20:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 nvme0n1 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:30.280 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTllN2FmOGQxMjUxYmY0YmE3OGQ0NjJjNDU2OTU2YjdhM2IzNjU5NDE4ZjVjNmU4n7ZRUw==: 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: ]] 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmYzMWNjNTZiNjFlNGNjZGM2ODZjYzBhNzFlM2JhZDhs4ZC0: 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.281 20:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.845 nvme0n1 00:17:30.845 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.845 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.845 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.845 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.845 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.845 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQxYzcxMzhjMmFhNmNlYjljZTE2Nzg1MmU3OGI0YTVmZWZmZWMwZDQxMTU2MDQ5M2ZkOTE4YWYwMWQ4ZjU2NDGsFog=: 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.104 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.105 20:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.105 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.686 nvme0n1 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.686 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.686 request: 00:17:31.686 { 00:17:31.686 "name": "nvme0", 00:17:31.686 "trtype": "tcp", 00:17:31.686 "traddr": "10.0.0.1", 00:17:31.686 "adrfam": "ipv4", 00:17:31.686 "trsvcid": "4420", 00:17:31.686 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:31.686 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:31.686 "prchk_reftag": false, 00:17:31.686 "prchk_guard": false, 00:17:31.686 "hdgst": false, 00:17:31.686 "ddgst": false, 00:17:31.686 "allow_unrecognized_csi": false, 00:17:31.686 "method": "bdev_nvme_attach_controller", 00:17:31.686 "req_id": 1 00:17:31.686 } 00:17:31.686 Got JSON-RPC error response 00:17:31.686 response: 00:17:31.686 { 00:17:31.686 "code": -5, 00:17:31.686 "message": "Input/output error" 00:17:31.686 } 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.687 20:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.687 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.945 request: 00:17:31.945 { 00:17:31.945 "name": "nvme0", 00:17:31.945 "trtype": "tcp", 00:17:31.945 "traddr": "10.0.0.1", 00:17:31.945 "adrfam": "ipv4", 00:17:31.945 "trsvcid": "4420", 00:17:31.945 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:31.945 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:31.945 "prchk_reftag": false, 00:17:31.945 "prchk_guard": false, 00:17:31.945 "hdgst": false, 00:17:31.945 "ddgst": false, 00:17:31.945 "dhchap_key": "key2", 00:17:31.945 "allow_unrecognized_csi": false, 00:17:31.945 "method": "bdev_nvme_attach_controller", 00:17:31.945 "req_id": 1 00:17:31.945 } 00:17:31.945 Got JSON-RPC error response 00:17:31.945 response: 00:17:31.945 { 00:17:31.945 "code": -5, 00:17:31.945 "message": "Input/output error" 00:17:31.945 } 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.945 20:38:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.945 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.946 request: 00:17:31.946 { 00:17:31.946 "name": "nvme0", 00:17:31.946 "trtype": "tcp", 00:17:31.946 "traddr": "10.0.0.1", 00:17:31.946 "adrfam": "ipv4", 00:17:31.946 "trsvcid": "4420", 
00:17:31.946 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:31.946 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:31.946 "prchk_reftag": false, 00:17:31.946 "prchk_guard": false, 00:17:31.946 "hdgst": false, 00:17:31.946 "ddgst": false, 00:17:31.946 "dhchap_key": "key1", 00:17:31.946 "dhchap_ctrlr_key": "ckey2", 00:17:31.946 "allow_unrecognized_csi": false, 00:17:31.946 "method": "bdev_nvme_attach_controller", 00:17:31.946 "req_id": 1 00:17:31.946 } 00:17:31.946 Got JSON-RPC error response 00:17:31.946 response: 00:17:31.946 { 00:17:31.946 "code": -5, 00:17:31.946 "message": "Input/output error" 00:17:31.946 } 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.946 nvme0n1 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.946 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.205 request: 00:17:32.205 { 00:17:32.205 "name": "nvme0", 00:17:32.205 "dhchap_key": "key1", 00:17:32.205 "dhchap_ctrlr_key": "ckey2", 00:17:32.205 "method": "bdev_nvme_set_keys", 00:17:32.205 "req_id": 1 00:17:32.205 } 00:17:32.205 Got JSON-RPC error response 00:17:32.205 response: 00:17:32.205 
{ 00:17:32.205 "code": -13, 00:17:32.205 "message": "Permission denied" 00:17:32.205 } 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:32.205 20:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:33.141 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0Yzc4Yjg2M2QwY2VlOTg1NmM2YjQ3N2Q0NTU0NjU2OGIwNmFkODNlM2ZmMjRlcuZlZg==: 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: ]] 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzNGI2YTk3ZjM3ZmRjY2NlMWZhMjZkY2VjNDE5OWMyN2E5ZWZmZTczNTkwNzIwUq9xqg==: 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.142 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 nvme0n1 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjZmMWE4ZmExOTc0NjE3ODkxNzEzNTU3ZThlNmIwZmW8U1uU: 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: ]] 00:17:33.400 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjRhMjYyNjdhOGFmN2EzZjZiNDAwODZhOThkZGRiMjeso36s: 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.401 request: 00:17:33.401 { 00:17:33.401 "name": "nvme0", 00:17:33.401 "dhchap_key": "key2", 00:17:33.401 "dhchap_ctrlr_key": "ckey1", 00:17:33.401 "method": "bdev_nvme_set_keys", 00:17:33.401 "req_id": 1 00:17:33.401 } 00:17:33.401 Got JSON-RPC error response 00:17:33.401 response: 00:17:33.401 { 00:17:33.401 "code": -13, 00:17:33.401 "message": "Permission denied" 00:17:33.401 } 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:33.401 20:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:34.336 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.336 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:34.336 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.336 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.336 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.595 rmmod nvme_tcp 00:17:34.595 rmmod nvme_fabrics 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78505 ']' 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78505 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78505 ']' 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78505 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78505 00:17:34.595 killing process with pid 78505 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78505' 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78505 00:17:34.595 20:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78505 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:34.853 20:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.853 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:35.112 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:35.113 20:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:35.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:35.939 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
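The request/response dumps earlier in this test are the expected negative paths of the DH-HMAC-CHAP host checks: bdev_nvme_attach_controller fails with JSON-RPC error -5 (Input/output error) when the host presents no DH-CHAP key, or a key/controller-key pair that does not match what the target expects, while bdev_nvme_set_keys on the already-authenticated nvme0 controller is rejected with -13 (Permission denied) when the new pair is mismatched. A minimal sketch of the same calls issued directly with scripts/rpc.py, assuming rpc_cmd in these scripts wraps that tool and that key1/key2/ckey1/ckey2 are the key names registered earlier in the test (not shown in this excerpt):

    # expected to fail with -5: controller key ckey2 does not pair with host key key1
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2
    # expected to fail with -13: re-keying the live controller with a mismatched pair
    scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1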
00:17:35.939 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.939 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HG3 /tmp/spdk.key-null.p92 /tmp/spdk.key-sha256.qz3 /tmp/spdk.key-sha384.7Z1 /tmp/spdk.key-sha512.dfM /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:35.939 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:36.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.457 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.457 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.457 00:17:36.457 real 0m37.632s 00:17:36.457 user 0m33.954s 00:17:36.457 sys 0m3.873s 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.457 ************************************ 00:17:36.457 END TEST nvmf_auth_host 00:17:36.457 ************************************ 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.457 ************************************ 00:17:36.457 START TEST nvmf_digest 00:17:36.457 ************************************ 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:36.457 * Looking for test storage... 
00:17:36.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.457 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.716 --rc genhtml_branch_coverage=1 00:17:36.716 --rc genhtml_function_coverage=1 00:17:36.716 --rc genhtml_legend=1 00:17:36.716 --rc geninfo_all_blocks=1 00:17:36.716 --rc geninfo_unexecuted_blocks=1 00:17:36.716 00:17:36.716 ' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.716 --rc genhtml_branch_coverage=1 00:17:36.716 --rc genhtml_function_coverage=1 00:17:36.716 --rc genhtml_legend=1 00:17:36.716 --rc geninfo_all_blocks=1 00:17:36.716 --rc geninfo_unexecuted_blocks=1 00:17:36.716 00:17:36.716 ' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.716 --rc genhtml_branch_coverage=1 00:17:36.716 --rc genhtml_function_coverage=1 00:17:36.716 --rc genhtml_legend=1 00:17:36.716 --rc geninfo_all_blocks=1 00:17:36.716 --rc geninfo_unexecuted_blocks=1 00:17:36.716 00:17:36.716 ' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.716 --rc genhtml_branch_coverage=1 00:17:36.716 --rc genhtml_function_coverage=1 00:17:36.716 --rc genhtml_legend=1 00:17:36.716 --rc geninfo_all_blocks=1 00:17:36.716 --rc geninfo_unexecuted_blocks=1 00:17:36.716 00:17:36.716 ' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.716 20:38:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.716 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:36.717 Cannot find device "nvmf_init_br" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:36.717 Cannot find device "nvmf_init_br2" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:36.717 Cannot find device "nvmf_tgt_br" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:36.717 Cannot find device "nvmf_tgt_br2" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:36.717 Cannot find device "nvmf_init_br" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:36.717 Cannot find device "nvmf_init_br2" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:36.717 Cannot find device "nvmf_tgt_br" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:36.717 Cannot find device "nvmf_tgt_br2" 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:36.717 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:36.718 Cannot find device "nvmf_br" 00:17:36.718 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:36.718 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:36.718 Cannot find device "nvmf_init_if" 00:17:36.718 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:36.718 20:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:36.718 Cannot find device "nvmf_init_if2" 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.718 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.976 20:38:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.976 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:36.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:36.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:17:36.977 00:17:36.977 --- 10.0.0.3 ping statistics --- 00:17:36.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.977 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:36.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:36.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:17:36.977 00:17:36.977 --- 10.0.0.4 ping statistics --- 00:17:36.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.977 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:36.977 00:17:36.977 --- 10.0.0.1 ping statistics --- 00:17:36.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.977 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:36.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:17:36.977 00:17:36.977 --- 10.0.0.2 ping statistics --- 00:17:36.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.977 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:36.977 ************************************ 00:17:36.977 START TEST nvmf_digest_clean 00:17:36.977 ************************************ 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
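Before the digest tests start, nvmf_veth_init has built the virtual topology that the four pings above verify: nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) stay in the root namespace, nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace, the veth peer ends are enslaved to the nvmf_br bridge, and iptables ACCEPT rules are inserted for TCP port 4420 on the initiator interfaces. A condensed sketch of the same setup for a single initiator/target pair, reusing the interface names from the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # root namespace reaches the target interface through the bridge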
00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80152 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80152 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80152 ']' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.977 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.236 [2024-11-26 20:38:37.410745] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:17:37.236 [2024-11-26 20:38:37.411165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.236 [2024-11-26 20:38:37.579791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.495 [2024-11-26 20:38:37.667651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.495 [2024-11-26 20:38:37.667710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.495 [2024-11-26 20:38:37.667725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.495 [2024-11-26 20:38:37.667736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.495 [2024-11-26 20:38:37.667746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.495 [2024-11-26 20:38:37.668197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.495 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.495 [2024-11-26 20:38:37.833019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.754 null0 00:17:37.754 [2024-11-26 20:38:37.891309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.754 [2024-11-26 20:38:37.915446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80176 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80176 /var/tmp/bperf.sock 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80176 ']' 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:37.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.754 20:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.754 [2024-11-26 20:38:37.972475] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:17:37.754 [2024-11-26 20:38:37.972686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80176 ] 00:17:38.012 [2024-11-26 20:38:38.122955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.012 [2024-11-26 20:38:38.190851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.012 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.012 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:38.012 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:38.013 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:38.013 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:38.271 [2024-11-26 20:38:38.575287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.530 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.530 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.788 nvme0n1 00:17:38.788 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:38.789 20:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.789 Running I/O for 2 seconds... 
00:17:41.101 14986.00 IOPS, 58.54 MiB/s [2024-11-26T20:38:41.456Z] 15113.00 IOPS, 59.04 MiB/s 00:17:41.101 Latency(us) 00:17:41.101 [2024-11-26T20:38:41.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.101 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:41.101 nvme0n1 : 2.00 15140.22 59.14 0.00 0.00 8447.45 7536.64 23473.80 00:17:41.101 [2024-11-26T20:38:41.456Z] =================================================================================================================== 00:17:41.101 [2024-11-26T20:38:41.456Z] Total : 15140.22 59.14 0.00 0.00 8447.45 7536.64 23473.80 00:17:41.101 { 00:17:41.101 "results": [ 00:17:41.101 { 00:17:41.101 "job": "nvme0n1", 00:17:41.101 "core_mask": "0x2", 00:17:41.101 "workload": "randread", 00:17:41.101 "status": "finished", 00:17:41.101 "queue_depth": 128, 00:17:41.101 "io_size": 4096, 00:17:41.101 "runtime": 2.004858, 00:17:41.101 "iops": 15140.22439494468, 00:17:41.101 "mibps": 59.14150154275266, 00:17:41.101 "io_failed": 0, 00:17:41.101 "io_timeout": 0, 00:17:41.101 "avg_latency_us": 8447.448275201112, 00:17:41.101 "min_latency_us": 7536.64, 00:17:41.101 "max_latency_us": 23473.803636363635 00:17:41.101 } 00:17:41.101 ], 00:17:41.102 "core_count": 1 00:17:41.102 } 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:41.102 | select(.opcode=="crc32c") 00:17:41.102 | "\(.module_name) \(.executed)"' 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80176 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80176 ']' 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80176 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.102 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80176 00:17:41.361 killing process with pid 80176 00:17:41.361 Received shutdown signal, test time was about 2.000000 seconds 00:17:41.361 00:17:41.361 Latency(us) 00:17:41.361 [2024-11-26T20:38:41.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.361 
[2024-11-26T20:38:41.716Z] =================================================================================================================== 00:17:41.361 [2024-11-26T20:38:41.716Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80176' 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80176 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80176 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80229 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80229 /var/tmp/bperf.sock 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80229 ']' 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.361 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.620 [2024-11-26 20:38:41.740446] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:17:41.620 [2024-11-26 20:38:41.741353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80229 ] 00:17:41.620 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:41.620 Zero copy mechanism will not be used. 00:17:41.620 [2024-11-26 20:38:41.886703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.620 [2024-11-26 20:38:41.955620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.880 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.880 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:41.880 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:41.880 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:41.880 20:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:42.140 [2024-11-26 20:38:42.345372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.140 20:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.140 20:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.399 nvme0n1 00:17:42.399 20:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:42.399 20:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:42.658 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:42.658 Zero copy mechanism will not be used. 00:17:42.658 Running I/O for 2 seconds... 
00:17:44.531 7472.00 IOPS, 934.00 MiB/s [2024-11-26T20:38:44.886Z] 7336.00 IOPS, 917.00 MiB/s 00:17:44.532 Latency(us) 00:17:44.532 [2024-11-26T20:38:44.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.532 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:44.532 nvme0n1 : 2.00 7333.17 916.65 0.00 0.00 2178.43 1876.71 9830.40 00:17:44.532 [2024-11-26T20:38:44.887Z] =================================================================================================================== 00:17:44.532 [2024-11-26T20:38:44.887Z] Total : 7333.17 916.65 0.00 0.00 2178.43 1876.71 9830.40 00:17:44.532 { 00:17:44.532 "results": [ 00:17:44.532 { 00:17:44.532 "job": "nvme0n1", 00:17:44.532 "core_mask": "0x2", 00:17:44.532 "workload": "randread", 00:17:44.532 "status": "finished", 00:17:44.532 "queue_depth": 16, 00:17:44.532 "io_size": 131072, 00:17:44.532 "runtime": 2.002953, 00:17:44.532 "iops": 7333.172570699362, 00:17:44.532 "mibps": 916.6465713374203, 00:17:44.532 "io_failed": 0, 00:17:44.532 "io_timeout": 0, 00:17:44.532 "avg_latency_us": 2178.43146761735, 00:17:44.532 "min_latency_us": 1876.7127272727273, 00:17:44.532 "max_latency_us": 9830.4 00:17:44.532 } 00:17:44.532 ], 00:17:44.532 "core_count": 1 00:17:44.532 } 00:17:44.532 20:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:44.532 20:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:44.532 20:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:44.532 20:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:44.532 20:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:44.532 | select(.opcode=="crc32c") 00:17:44.532 | "\(.module_name) \(.executed)"' 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80229 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80229 ']' 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80229 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80229 00:17:45.099 killing process with pid 80229 00:17:45.099 Received shutdown signal, test time was about 2.000000 seconds 00:17:45.099 00:17:45.099 Latency(us) 00:17:45.099 [2024-11-26T20:38:45.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.099 
[2024-11-26T20:38:45.454Z] =================================================================================================================== 00:17:45.099 [2024-11-26T20:38:45.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80229' 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80229 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80229 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80282 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80282 /var/tmp/bperf.sock 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80282 ']' 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:45.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.099 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:45.359 [2024-11-26 20:38:45.474926] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:17:45.359 [2024-11-26 20:38:45.475714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80282 ] 00:17:45.359 [2024-11-26 20:38:45.619134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.359 [2024-11-26 20:38:45.676128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.617 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.617 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:45.617 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:45.617 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:45.617 20:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:45.876 [2024-11-26 20:38:46.055651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:45.876 20:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:45.876 20:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.134 nvme0n1 00:17:46.134 20:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:46.134 20:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:46.393 Running I/O for 2 seconds... 
00:17:48.707 16003.00 IOPS, 62.51 MiB/s [2024-11-26T20:38:49.062Z] 16256.50 IOPS, 63.50 MiB/s 00:17:48.707 Latency(us) 00:17:48.707 [2024-11-26T20:38:49.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.707 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.707 nvme0n1 : 2.01 16301.43 63.68 0.00 0.00 7845.04 6970.65 15252.01 00:17:48.707 [2024-11-26T20:38:49.062Z] =================================================================================================================== 00:17:48.707 [2024-11-26T20:38:49.062Z] Total : 16301.43 63.68 0.00 0.00 7845.04 6970.65 15252.01 00:17:48.707 { 00:17:48.707 "results": [ 00:17:48.707 { 00:17:48.707 "job": "nvme0n1", 00:17:48.707 "core_mask": "0x2", 00:17:48.707 "workload": "randwrite", 00:17:48.707 "status": "finished", 00:17:48.707 "queue_depth": 128, 00:17:48.707 "io_size": 4096, 00:17:48.707 "runtime": 2.01013, 00:17:48.707 "iops": 16301.433240636177, 00:17:48.707 "mibps": 63.67747359623507, 00:17:48.707 "io_failed": 0, 00:17:48.707 "io_timeout": 0, 00:17:48.707 "avg_latency_us": 7845.044545454545, 00:17:48.707 "min_latency_us": 6970.647272727273, 00:17:48.707 "max_latency_us": 15252.014545454545 00:17:48.707 } 00:17:48.707 ], 00:17:48.707 "core_count": 1 00:17:48.707 } 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:48.707 | select(.opcode=="crc32c") 00:17:48.707 | "\(.module_name) \(.executed)"' 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80282 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80282 ']' 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80282 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80282 00:17:48.707 killing process with pid 80282 00:17:48.707 Received shutdown signal, test time was about 2.000000 seconds 00:17:48.707 00:17:48.707 Latency(us) 00:17:48.707 [2024-11-26T20:38:49.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:48.707 [2024-11-26T20:38:49.062Z] =================================================================================================================== 00:17:48.707 [2024-11-26T20:38:49.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80282' 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80282 00:17:48.707 20:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80282 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80330 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80330 /var/tmp/bperf.sock 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80330 ']' 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:48.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.965 20:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:48.965 [2024-11-26 20:38:49.239878] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:17:48.965 [2024-11-26 20:38:49.240681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80330 ] 00:17:48.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.965 Zero copy mechanism will not be used. 00:17:49.224 [2024-11-26 20:38:49.391159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.224 [2024-11-26 20:38:49.454396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:50.238 [2024-11-26 20:38:50.513439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.238 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.805 nvme0n1 00:17:50.805 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:50.805 20:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:50.805 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:50.805 Zero copy mechanism will not be used. 00:17:50.805 Running I/O for 2 seconds... 
00:17:53.123 6530.00 IOPS, 816.25 MiB/s [2024-11-26T20:38:53.478Z] 6534.00 IOPS, 816.75 MiB/s 00:17:53.123 Latency(us) 00:17:53.123 [2024-11-26T20:38:53.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.123 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:53.123 nvme0n1 : 2.00 6531.98 816.50 0.00 0.00 2443.65 1720.32 10426.18 00:17:53.123 [2024-11-26T20:38:53.478Z] =================================================================================================================== 00:17:53.123 [2024-11-26T20:38:53.478Z] Total : 6531.98 816.50 0.00 0.00 2443.65 1720.32 10426.18 00:17:53.123 { 00:17:53.123 "results": [ 00:17:53.123 { 00:17:53.123 "job": "nvme0n1", 00:17:53.123 "core_mask": "0x2", 00:17:53.123 "workload": "randwrite", 00:17:53.123 "status": "finished", 00:17:53.123 "queue_depth": 16, 00:17:53.123 "io_size": 131072, 00:17:53.123 "runtime": 2.004141, 00:17:53.123 "iops": 6531.975544634834, 00:17:53.123 "mibps": 816.4969430793542, 00:17:53.123 "io_failed": 0, 00:17:53.123 "io_timeout": 0, 00:17:53.123 "avg_latency_us": 2443.6509413129074, 00:17:53.123 "min_latency_us": 1720.32, 00:17:53.123 "max_latency_us": 10426.181818181818 00:17:53.123 } 00:17:53.123 ], 00:17:53.123 "core_count": 1 00:17:53.123 } 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:53.123 | select(.opcode=="crc32c") 00:17:53.123 | "\(.module_name) \(.executed)"' 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80330 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80330 ']' 00:17:53.123 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80330 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80330 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.124 killing process with pid 80330 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80330' 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80330 00:17:53.124 Received shutdown signal, test time was about 2.000000 seconds 00:17:53.124 00:17:53.124 Latency(us) 00:17:53.124 [2024-11-26T20:38:53.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.124 [2024-11-26T20:38:53.479Z] =================================================================================================================== 00:17:53.124 [2024-11-26T20:38:53.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.124 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80330 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80152 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80152 ']' 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80152 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80152 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.384 killing process with pid 80152 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80152' 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80152 00:17:53.384 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80152 00:17:53.644 00:17:53.644 real 0m16.547s 00:17:53.644 user 0m32.639s 00:17:53.644 sys 0m4.562s 00:17:53.644 ************************************ 00:17:53.644 END TEST nvmf_digest_clean 00:17:53.644 ************************************ 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:53.644 ************************************ 00:17:53.644 START TEST nvmf_digest_error 00:17:53.644 ************************************ 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:53.644 20:38:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80419 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80419 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80419 ']' 00:17:53.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.644 20:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.644 [2024-11-26 20:38:53.975215] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:17:53.644 [2024-11-26 20:38:53.975348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.903 [2024-11-26 20:38:54.119452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.903 [2024-11-26 20:38:54.176943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.903 [2024-11-26 20:38:54.177019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.903 [2024-11-26 20:38:54.177046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.903 [2024-11-26 20:38:54.177055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.903 [2024-11-26 20:38:54.177063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:53.903 [2024-11-26 20:38:54.177475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.903 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.903 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:53.903 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.903 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.903 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.162 [2024-11-26 20:38:54.281904] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:54.162 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.163 [2024-11-26 20:38:54.343980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.163 null0 00:17:54.163 [2024-11-26 20:38:54.400159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.163 [2024-11-26 20:38:54.424329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80448 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80448 /var/tmp/bperf.sock 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:54.163 20:38:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80448 ']' 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.163 20:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:54.163 [2024-11-26 20:38:54.487654] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:17:54.163 [2024-11-26 20:38:54.487754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80448 ] 00:17:54.422 [2024-11-26 20:38:54.640623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.422 [2024-11-26 20:38:54.709285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.422 [2024-11-26 20:38:54.770924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.359 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.359 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:55.359 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:55.359 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:55.617 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:55.617 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.617 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:55.617 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.617 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.617 20:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.874 nvme0n1 00:17:55.874 20:38:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:55.874 20:38:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.874 20:38:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:55.874 20:38:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.874 20:38:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:55.874 20:38:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:56.133 Running I/O for 2 seconds... 00:17:56.133 [2024-11-26 20:38:56.276928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.276999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.277030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.295125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.295347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.295367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.313100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.313139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.313167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.330042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.330276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.330303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.348166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.348206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.348248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.365893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.365932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4013 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.365962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.383108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.383161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.383191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.401205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.401439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.401457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.419804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.419845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.419860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.437819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.437856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.437885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.455777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.455817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.455830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.133 [2024-11-26 20:38:56.472895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.133 [2024-11-26 20:38:56.472956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.133 [2024-11-26 20:38:56.472986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.491017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.491054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:23198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.491083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.509265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.509303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.509348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.527113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.527188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.527202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.545207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.545431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.545449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.563798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.563846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.563861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.581901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.581946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.581960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.600210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.600261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.600292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.618380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.618561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.618579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.636093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.636144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.636157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.654506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.654690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.654707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.672707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.672928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.672945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.690102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.690142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.690157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.707771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.707817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.707831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.725844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 [2024-11-26 20:38:56.725894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.392 [2024-11-26 20:38:56.725926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.392 [2024-11-26 20:38:56.742743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.392 
[2024-11-26 20:38:56.742914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.393 [2024-11-26 20:38:56.742934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.760426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.760467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.760481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.777804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.777845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.777859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.796044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.796098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.796112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.814171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.814360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.814378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.831942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.831985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.832001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.848854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.849012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.867067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.867133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.867148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.884413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.884470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.884486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.901160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.901214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.901242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.919131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.919333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.919351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.936385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.936426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.936440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.953234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.953424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.953451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.971706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.971746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.971759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.651 [2024-11-26 20:38:56.989580] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.651 [2024-11-26 20:38:56.989619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.651 [2024-11-26 20:38:56.989634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.007816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.007891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.007905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.025841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.025880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.025893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.043232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.043315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.043347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.060565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.060763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.060797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.078471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.078551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.078568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.095262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.095524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.095544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:56.909 [2024-11-26 20:38:57.113971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.114010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.114041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.132731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.132777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.132791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.150244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.150314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.150361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.909 [2024-11-26 20:38:57.167982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.909 [2024-11-26 20:38:57.168019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.909 [2024-11-26 20:38:57.168047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.910 [2024-11-26 20:38:57.185692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.910 [2024-11-26 20:38:57.185746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.910 [2024-11-26 20:38:57.185776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.910 [2024-11-26 20:38:57.203424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.910 [2024-11-26 20:38:57.203468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.910 [2024-11-26 20:38:57.203482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.910 [2024-11-26 20:38:57.220577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.910 [2024-11-26 20:38:57.220647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.910 [2024-11-26 20:38:57.220662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.910 [2024-11-26 20:38:57.237482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.910 [2024-11-26 20:38:57.237717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.910 [2024-11-26 20:38:57.237737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.910 14169.00 IOPS, 55.35 MiB/s [2024-11-26T20:38:57.265Z] [2024-11-26 20:38:57.256580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:56.910 [2024-11-26 20:38:57.256623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.910 [2024-11-26 20:38:57.256637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.273435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.273589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 [2024-11-26 20:38:57.273607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.289917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.289953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 [2024-11-26 20:38:57.289982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.306307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.306374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 [2024-11-26 20:38:57.306405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.322538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.322577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 [2024-11-26 20:38:57.322590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.339059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.339105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 
[2024-11-26 20:38:57.339135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.356082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.356267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 [2024-11-26 20:38:57.356285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.372704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.167 [2024-11-26 20:38:57.372771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.167 [2024-11-26 20:38:57.372786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.167 [2024-11-26 20:38:57.397043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.397087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.397118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.168 [2024-11-26 20:38:57.415121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.415168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.415203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.168 [2024-11-26 20:38:57.434554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.434608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.434622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.168 [2024-11-26 20:38:57.453592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.453634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.453664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.168 [2024-11-26 20:38:57.472068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.472431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19729 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.472451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.168 [2024-11-26 20:38:57.489817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.489866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.489897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.168 [2024-11-26 20:38:57.506788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.168 [2024-11-26 20:38:57.506985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.168 [2024-11-26 20:38:57.507018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.425 [2024-11-26 20:38:57.523602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.425 [2024-11-26 20:38:57.523652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.425 [2024-11-26 20:38:57.523666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.425 [2024-11-26 20:38:57.540405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.425 [2024-11-26 20:38:57.540464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.425 [2024-11-26 20:38:57.540478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.425 [2024-11-26 20:38:57.557149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.425 [2024-11-26 20:38:57.557189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.425 [2024-11-26 20:38:57.557218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.425 [2024-11-26 20:38:57.574420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.425 [2024-11-26 20:38:57.574462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.425 [2024-11-26 20:38:57.574476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.425 [2024-11-26 20:38:57.592447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.425 [2024-11-26 20:38:57.592483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.425 [2024-11-26 20:38:57.592511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.425 [2024-11-26 20:38:57.609408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.609624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.626177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.626249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.626280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.642121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.642186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.642217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.658972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.659014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.659028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.676264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.676330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.676346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.692971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.693019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.693034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.710696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 
00:17:57.426 [2024-11-26 20:38:57.711088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.711107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.728806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.728843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.728872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.745750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.745788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.745802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.426 [2024-11-26 20:38:57.761944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.426 [2024-11-26 20:38:57.761980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.426 [2024-11-26 20:38:57.762009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.683 [2024-11-26 20:38:57.779779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.683 [2024-11-26 20:38:57.779819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.779833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.796689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.796725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.796753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.813482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.813565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.813610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.831480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.831547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.831615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.848887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.848925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.848953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.865383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.865419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.865448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.882945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.883162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.883196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.900287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.900325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.900355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.918152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.918215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.918257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.936515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.936560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.936575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.953736] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.953778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.953792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.970744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.970782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.970811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:57.988305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:57.988344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:57.988359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:58.004902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:58.004940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:58.004970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.684 [2024-11-26 20:38:58.022651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.684 [2024-11-26 20:38:58.022689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.684 [2024-11-26 20:38:58.022719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.039752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.039791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.039804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.056216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.056276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.056291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:57.942 [2024-11-26 20:38:58.073760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.073975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.073994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.090683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.090741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.090755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.107784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.107946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.107963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.125173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.125210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.125266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.142092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.142128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.142157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.160558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.160623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.160637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.178564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.178606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.178637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.195530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.195591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.195621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.212366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.212411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.212440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.229031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.229076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.229106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 [2024-11-26 20:38:58.245103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbc3fb0) 00:17:57.942 [2024-11-26 20:38:58.245139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.942 [2024-11-26 20:38:58.245167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.942 14421.50 IOPS, 56.33 MiB/s 00:17:57.942 Latency(us) 00:17:57.942 [2024-11-26T20:38:58.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.942 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:57.942 nvme0n1 : 2.01 14438.65 56.40 0.00 0.00 8858.01 7626.01 33840.41 00:17:57.942 [2024-11-26T20:38:58.297Z] =================================================================================================================== 00:17:57.942 [2024-11-26T20:38:58.298Z] Total : 14438.65 56.40 0.00 0.00 8858.01 7626.01 33840.41 00:17:57.943 { 00:17:57.943 "results": [ 00:17:57.943 { 00:17:57.943 "job": "nvme0n1", 00:17:57.943 "core_mask": "0x2", 00:17:57.943 "workload": "randread", 00:17:57.943 "status": "finished", 00:17:57.943 "queue_depth": 128, 00:17:57.943 "io_size": 4096, 00:17:57.943 "runtime": 2.006489, 00:17:57.943 "iops": 14438.65378778553, 00:17:57.943 "mibps": 56.400991358537226, 00:17:57.943 "io_failed": 0, 00:17:57.943 "io_timeout": 0, 00:17:57.943 "avg_latency_us": 8858.00740728189, 00:17:57.943 "min_latency_us": 7626.007272727273, 00:17:57.943 "max_latency_us": 33840.40727272727 00:17:57.943 } 00:17:57.943 ], 00:17:57.943 "core_count": 1 00:17:57.943 } 00:17:57.943 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:57.943 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:57.943 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:57.943 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:57.943 | .driver_specific 00:17:57.943 | .nvme_error 00:17:57.943 | .status_code 00:17:57.943 | .command_transient_transport_error' 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80448 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80448 ']' 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80448 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80448 00:17:58.509 killing process with pid 80448 00:17:58.509 Received shutdown signal, test time was about 2.000000 seconds 00:17:58.509 00:17:58.509 Latency(us) 00:17:58.509 [2024-11-26T20:38:58.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.509 [2024-11-26T20:38:58.864Z] =================================================================================================================== 00:17:58.509 [2024-11-26T20:38:58.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80448' 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80448 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80448 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80504 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80504 /var/tmp/bperf.sock 
00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80504 ']' 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:58.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.509 20:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.768 [2024-11-26 20:38:58.891788] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:17:58.769 [2024-11-26 20:38:58.892284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:58.769 Zero copy mechanism will not be used. 00:17:58.769 llocations --file-prefix=spdk_pid80504 ] 00:17:58.769 [2024-11-26 20:38:59.040908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.769 [2024-11-26 20:38:59.102141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.026 [2024-11-26 20:38:59.158586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.026 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.026 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:59.026 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:59.026 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:59.284 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:59.284 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.284 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.284 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.284 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.284 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.543 nvme0n1 00:17:59.839 20:38:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:59.839 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.839 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.839 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.839 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:59.839 20:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:59.839 Zero copy mechanism will not be used. 00:17:59.839 Running I/O for 2 seconds... 00:17:59.839 [2024-11-26 20:39:00.034777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.035095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.035117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.039688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.039732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.039748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.044200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.044257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.044273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.048808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.048846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.048860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.053514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.053705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.053728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
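The stream of data digest errors that follows is exactly what the configuration traced above is meant to provoke. Condensed into one place, with the arguments copied verbatim from the trace, the setup looks roughly like the sketch below; BPERF_RPC is shorthand introduced here, and rpc_cmd is the harness helper seen in the trace (the socket it talks to is not shown in this excerpt):

BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
# Keep per-status NVMe error counters and retry failed I/O indefinitely, so injected
# digest errors are tallied instead of failing the bdevperf job.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start with CRC32C error injection disabled.
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the TCP target with data digest (--ddgst) enabled, so every READ payload is
# CRC32C-checked as it is received.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Switch injection to corrupt CRC32C results (same arguments as in the trace), then
# drive the 2-second randread workload that produces the digest errors below.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests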
00:17:59.839 [2024-11-26 20:39:00.058079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.058120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.058143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.062712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.062753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.062768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.067231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.067289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.839 [2024-11-26 20:39:00.067304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.839 [2024-11-26 20:39:00.071800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.839 [2024-11-26 20:39:00.071840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.071856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.076352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.076391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.076406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.080787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.080826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.080840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.085262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.085311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.085326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.089786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.089826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.089841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.094177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.094250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.094270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.098701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.098740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.098755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.103246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.103302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.103318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.107758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.108079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.108098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.112671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.112710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.112725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.117145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.117200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.117215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.121613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.121651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.121665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.126033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.126071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.126085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.130415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.130451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.130464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.134609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.134645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.134659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.139177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.139235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.139253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.143604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.143644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.143659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.148017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.148068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.148082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.152554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.152622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.157047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.157088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.157102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.161663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.161961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.840 [2024-11-26 20:39:00.161996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.840 [2024-11-26 20:39:00.166495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.840 [2024-11-26 20:39:00.166534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.841 [2024-11-26 20:39:00.166547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.841 [2024-11-26 20:39:00.170738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.841 [2024-11-26 20:39:00.170776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.841 [2024-11-26 20:39:00.170790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.841 [2024-11-26 20:39:00.174987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.841 [2024-11-26 20:39:00.175024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.841 [2024-11-26 20:39:00.175038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.841 [2024-11-26 20:39:00.179280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.841 [2024-11-26 20:39:00.179317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:59.841 [2024-11-26 20:39:00.179330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.841 [2024-11-26 20:39:00.183364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.841 [2024-11-26 20:39:00.183400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.841 [2024-11-26 20:39:00.183414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.841 [2024-11-26 20:39:00.187662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:17:59.841 [2024-11-26 20:39:00.187700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.841 [2024-11-26 20:39:00.187715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.191834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.191888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.191902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.196006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.196045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.196059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.200291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.200344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.200377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.204434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.204470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.204483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.208742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.208790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.208804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.212981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.213018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.213032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.217140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.217178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.217191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.221344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.221381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.221394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.225690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.225728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.225741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.229993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.230032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.230046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.234422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.234459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.234472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.238639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.238676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.238689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.243105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.243143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.243157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.247442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.101 [2024-11-26 20:39:00.247730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.101 [2024-11-26 20:39:00.247750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.101 [2024-11-26 20:39:00.252151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.252190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.252204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.256382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.256419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.256433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.260655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.260693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.260706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.265151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.265189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.265203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.269513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.269555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.269570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.274016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.274057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.274087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.278567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.278871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.278890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.283637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.283818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.283950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.288644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.288827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.289014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.293559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.293768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.293898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.298483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.298706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.298842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.303426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 
[2024-11-26 20:39:00.303630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.303765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.308516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.308726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.308858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.313438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.313632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.313763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.318417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.318597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.318732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.323076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.323131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.323146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.327681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.327846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.332294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.332333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.332347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.336743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.336785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.336807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.341438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.341478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.341492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.346196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.346282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.346299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.351042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.102 [2024-11-26 20:39:00.351091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.102 [2024-11-26 20:39:00.351120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.102 [2024-11-26 20:39:00.355603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.355648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.355663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.360174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.360260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.360277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.364583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.364647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.364662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.369013] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.369053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.369067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.373442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.373479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.373493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.377925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.377964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.377978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.382292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.382328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.382348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.386521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.386559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.386573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.390764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.390802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.390816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.395076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.395114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.395128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:18:00.103 [2024-11-26 20:39:00.399507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.399545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.399560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.403811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.403851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.403880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.408432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.408470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.408484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.412843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.412881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.412895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.417246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.417282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.417296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.421419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.421456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.421470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.425725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.425762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.425776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.430190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.430243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.430276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.434586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.434622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.434636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.438749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.438786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.438805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.442985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.103 [2024-11-26 20:39:00.443022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.103 [2024-11-26 20:39:00.443036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.103 [2024-11-26 20:39:00.447299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.104 [2024-11-26 20:39:00.447335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.104 [2024-11-26 20:39:00.447349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.104 [2024-11-26 20:39:00.451679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.104 [2024-11-26 20:39:00.451719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.104 [2024-11-26 20:39:00.451734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.456057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.456110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.456124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.460442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.460477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.460490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.464601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.464637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.464650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.468840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.468876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.468889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.473213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.473269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.473284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.477869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.477910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.477924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.482596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.482934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.482953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.487975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.488020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.488063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.492756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.492797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.492812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.497481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.497811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.497831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.502553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.502590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.502618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.364 [2024-11-26 20:39:00.507320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.364 [2024-11-26 20:39:00.507389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.364 [2024-11-26 20:39:00.507402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.511739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.511778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.511792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.516149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.516186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.516207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.520388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.520424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:00.365 [2024-11-26 20:39:00.520438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.524552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.524588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.524601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.528657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.528694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.528707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.532775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.532812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.532826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.536853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.536890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.536920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.541134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.541170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.541184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.545426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.545462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.545476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.549596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.549632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.549646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.553858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.553895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.553909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.558206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.558272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.558286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.562585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.562621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.562634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.566794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.566830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.566843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.571243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.571310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.571340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.575487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.575524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.575538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.579692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.579730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.579744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.583879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.583915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.583944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.588396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.588435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.588450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.592801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.592838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.592852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.597586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.597624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.597637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.602078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.602122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.602136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.606636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.606923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.606944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.611463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.611501] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.611516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.615832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.615873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.615887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.620274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.620310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.620324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.624673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.365 [2024-11-26 20:39:00.624712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.365 [2024-11-26 20:39:00.624727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.365 [2024-11-26 20:39:00.629169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.629209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.629246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.633552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.633750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.633768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.638096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.638136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.638152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.642514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 
00:18:00.366 [2024-11-26 20:39:00.642553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.642579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.646864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.646904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.646918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.651193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.651246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.651262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.655610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.655650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.655665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.660099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.660137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.660152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.664500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.664698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.664716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.669172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.669235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.669250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.673730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.673770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.673784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.678303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.678336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.678350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.682664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.682702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.682718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.687091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.687130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.687151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.691625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.691805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.691824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.696167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.696208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.696234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.700661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.700700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.700720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.705094] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.705132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.705146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.709575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.709612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.709626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.366 [2024-11-26 20:39:00.713954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.366 [2024-11-26 20:39:00.713992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.366 [2024-11-26 20:39:00.714017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.718453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.718491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.718506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.722840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.722878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.722892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.727212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.727284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.727299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.731594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.731632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.731648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:18:00.629 [2024-11-26 20:39:00.736063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.736103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.736122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.740336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.740374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.744639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.744678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.744698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.749121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.749160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.749175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.753403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.753595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.753621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.757841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.757882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.757899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.629 [2024-11-26 20:39:00.762194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.629 [2024-11-26 20:39:00.762248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.629 [2024-11-26 20:39:00.762270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.766702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.766743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.766758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.771167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.771207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.771248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.775523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.775573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.775589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.779880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.779920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.779934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.784215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.784269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.784286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.788665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.788864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.788892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.793309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.793347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.793368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.797875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.797913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.797933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.802296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.802333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.802347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.806506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.806544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.806564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.810736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.810774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.810788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.814939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.814976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.814997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.819192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.819241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.819256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.823474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.823511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.823525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.827778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.827815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.827829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.831954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.831994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.832009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.836393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.836430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.836445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.840782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.840821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.840835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.845150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.845189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.845203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.849528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.849565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 [2024-11-26 20:39:00.849579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.853715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.853752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.630 
[2024-11-26 20:39:00.853772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.630 [2024-11-26 20:39:00.858113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.630 [2024-11-26 20:39:00.858151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.858165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.862400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.862436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.862450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.866557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.866593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.866606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.870638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.870675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.870689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.875132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.875172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.879763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.879803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.879819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.884439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.884477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.884507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.889054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.889094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.889109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.893758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.894019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.898452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.898491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.898506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.902847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.902900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.907457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.907494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.907508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.911935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.911972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.911986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.916496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.916699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.916717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.921240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.921295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.921312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.925821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.925858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.925872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.930368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.930420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.930434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.934777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.934816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.934830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.939421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.939460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.939475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.944013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.944055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.944077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.948558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.631 [2024-11-26 20:39:00.948596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.631 [2024-11-26 20:39:00.948610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.631 [2024-11-26 20:39:00.952965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 [2024-11-26 20:39:00.953003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.953017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.632 [2024-11-26 20:39:00.957429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 [2024-11-26 20:39:00.957467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.957488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.632 [2024-11-26 20:39:00.961879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 [2024-11-26 20:39:00.961916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.961930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.632 [2024-11-26 20:39:00.966413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 [2024-11-26 20:39:00.966480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.966493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.632 [2024-11-26 20:39:00.970917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 [2024-11-26 20:39:00.970955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.970969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.632 [2024-11-26 20:39:00.975668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 [2024-11-26 20:39:00.975709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.975724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.632 [2024-11-26 20:39:00.980196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.632 
[2024-11-26 20:39:00.980254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.632 [2024-11-26 20:39:00.980271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:00.984671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:00.984709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:00.984723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:00.989142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:00.989181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:00.989195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:00.993643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:00.993852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:00.993870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:00.998203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:00.998250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:00.998264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.002580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.002616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.002630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.007165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.007205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.007236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.011836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.011876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.011903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.016348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.016387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.016402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.020884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.020921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.020935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.025457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.025493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.025514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.893 6897.00 IOPS, 862.12 MiB/s [2024-11-26T20:39:01.248Z] [2024-11-26 20:39:01.030840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.031025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.031169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.035672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.035868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.036023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.040348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.040521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.040539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 
20:39:01.044905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.044945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.044960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.049350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.049386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.049400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.053773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.053810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.053824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.058406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.058457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.058472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.893 [2024-11-26 20:39:01.062866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.893 [2024-11-26 20:39:01.062904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.893 [2024-11-26 20:39:01.062917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.067291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.067343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.067358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.071769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.071809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.071824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.076308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.076345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.076386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.080964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.081003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.081018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.085639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.085676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.085690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.090202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.090268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.090301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.094560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.094598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.094611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.098845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.098883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.098896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.103209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.103277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.103292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.107754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.107793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.107808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.112239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.112291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.112308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.116714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.116906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.116923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.121476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.121514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.121534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.126010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.126080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.126095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.130655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.130693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.130707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.135193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.135244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.135259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.139708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.139884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.139903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.144476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.144515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.144544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.148968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.149004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.149018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.153590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.153643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.894 [2024-11-26 20:39:01.153656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.894 [2024-11-26 20:39:01.158058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.894 [2024-11-26 20:39:01.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.158109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.162795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.162994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.163012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.167517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.167558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 
[2024-11-26 20:39:01.167584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.172095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.172135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.172150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.176668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.176722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.176736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.181260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.181316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.185886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.186080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.186098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.190671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.190712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.190726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.195158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.195199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.195214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.199487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.199526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.199542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.203991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.204030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.204044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.208471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.208509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.208525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.212948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.212987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.213012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.217381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.217419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.217435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.221822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.221860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.221874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.226244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.226305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.226319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.230745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.230787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.230803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.235140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.895 [2024-11-26 20:39:01.235179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.895 [2024-11-26 20:39:01.235193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:00.895 [2024-11-26 20:39:01.239618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.896 [2024-11-26 20:39:01.239803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.896 [2024-11-26 20:39:01.239823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.896 [2024-11-26 20:39:01.244381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:00.896 [2024-11-26 20:39:01.244436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.896 [2024-11-26 20:39:01.244450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.248724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.248762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.248776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.253160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.253199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.253213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.257618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.257658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.257672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.262024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.262095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.262110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.266612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.266649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.266663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.271037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.271093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.271108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.275618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.275657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.275671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.157 [2024-11-26 20:39:01.280067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.157 [2024-11-26 20:39:01.280108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.157 [2024-11-26 20:39:01.280123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.284607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.284666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.284681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.289110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.289149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.289164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.293527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 
[2024-11-26 20:39:01.293695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.293713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.298054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.298094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.298109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.302402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.302439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.302453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.306767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.306804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.306817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.311321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.311359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.311373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.315859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.315898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.315927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.320567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.320605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.320620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.325055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.325101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.325115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.329783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.329822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.329837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.334309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.334347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.334362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.338752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.338790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.338805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.343043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.343082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.343113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.347501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.347548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.347574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.352221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.352274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.352289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.356731] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.356768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.356781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.361234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.361284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.361301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.365766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.365805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.365820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.370283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.370332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.370348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.374802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.374839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.374853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.379361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.379429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.379442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.158 [2024-11-26 20:39:01.383873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.383927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.383956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:18:01.158 [2024-11-26 20:39:01.388196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.158 [2024-11-26 20:39:01.388248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.158 [2024-11-26 20:39:01.388281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.392591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.392629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.392642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.396866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.396904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.396918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.401320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.401357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.401372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.405794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.405834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.405849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.410357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.410419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.410447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.414975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.415011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.415025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.419596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.419636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.419651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.424188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.424244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.424260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.428756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.428980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.428998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.433621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.433673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.433687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.438075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.438114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.438143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.442359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.442395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.442416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.446607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.446643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.446657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.450902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.450939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.450953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.455416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.455455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.455476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.459735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.459774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.459789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.464177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.464216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.464250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.468614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.468653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.468668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.473226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.473297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.473313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.477858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.477896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.477910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.482097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.482134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.482148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.486551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.486783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.486802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.491291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.491330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.491345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.495764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.495803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.495818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.500276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.500314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.500339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.159 [2024-11-26 20:39:01.504856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.159 [2024-11-26 20:39:01.504889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.159 [2024-11-26 20:39:01.504909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.509379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.509546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 
[2024-11-26 20:39:01.509565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.514070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.514110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.514125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.518460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.518500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.518514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.523230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.523289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.523307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.527925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.527964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.527987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.532676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.532879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.532898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.537699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.537756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.537771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.542468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.542507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.542527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.547037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.547076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.547106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.551724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.551890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.551909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.556512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.556550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.556565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.561162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.561202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.561239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.565924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.565965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.565980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.570568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.420 [2024-11-26 20:39:01.570773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.420 [2024-11-26 20:39:01.570791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.420 [2024-11-26 20:39:01.575332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.575372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.575386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.579983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.580029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.580055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.584688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.584726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.584740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.589128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.589166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.589180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.593551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.593619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.593633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.597979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.598017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.598031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.602432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.602483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.607022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.607092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.607107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.611555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.611620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.611635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.616162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.616201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.620650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.620827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.620845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.625455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.625493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.625507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.629743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.629780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.629794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.633953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.633990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.634003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.638345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 
[2024-11-26 20:39:01.638382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.638403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.642564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.642601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.642615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.646670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.646706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.646720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.650984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.651021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.651046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.655492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.655528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.655543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.659967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.660006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.660026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.664418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.421 [2024-11-26 20:39:01.664456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.421 [2024-11-26 20:39:01.664472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.421 [2024-11-26 20:39:01.668893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.668931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.668946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.673390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.673431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.673446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.677817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.677855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.677868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.682310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.682347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.682360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.686770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.686806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.686819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.691112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.691148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.691162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.695724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.695765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.695780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.700322] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.700378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.700398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.704824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.704886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.704901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.709571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.709608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.709621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.714254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.714309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.714325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.718784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.718821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.718835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.723309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.723345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.723360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.727573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.727628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.727643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
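
Each repeating triplet in the output above is one failure provoked by the digest-error test: nvme_tcp_accel_seq_recv_compute_crc32_done flags a data digest (CRC32C) mismatch on the receive path, the affected READ command is printed, and its completion is reported with the TRANSIENT TRANSPORT ERROR status (00/22), which the test later reads back as a per-bdev counter rather than treating as fatal. A rough way to tally these entries from a saved copy of this output, as a sketch only (the filename below is a placeholder, not a file this job produces):

  # Count receive-side CRC32C mismatches and the completions reported for them.
  # "bperf.log" is a placeholder name for a saved copy of this console output.
  grep -c 'data digest error on tqpair' bperf.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
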
00:18:01.422 [2024-11-26 20:39:01.732168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.732248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.732280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.736548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.736768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.736786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.741163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.741200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.745333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.745368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.745381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.749679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.749716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.749730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.754279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.754332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.754347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.758876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.758913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.758927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.422 [2024-11-26 20:39:01.763482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.422 [2024-11-26 20:39:01.763674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.422 [2024-11-26 20:39:01.763693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.423 [2024-11-26 20:39:01.768202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.423 [2024-11-26 20:39:01.768265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.423 [2024-11-26 20:39:01.768282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.772554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.772606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.772619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.776770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.776807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.776821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.781057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.781093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.781107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.785561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.785600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.785623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.789984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.790039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.790054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.794447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.794485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.794500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.798854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.798894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.798908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.803299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.803337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.803351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.807744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.807784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.807799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.812154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.812193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.812238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.816583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.816630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.683 [2024-11-26 20:39:01.816655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.683 [2024-11-26 20:39:01.821070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.683 [2024-11-26 20:39:01.821112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.821128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.825343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.825381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.825396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.829704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.829743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.829762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.834125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.834164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.834179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.838479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.838517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.838532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.842918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.842957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.842972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.847190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.847261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.847277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.851524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.851571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 
[2024-11-26 20:39:01.851586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.855907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.855947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.855962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.860205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.860264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.860279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.864552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.864798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.864821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.869115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.869156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.869171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.873514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.873554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.873569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.877827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.877866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.877880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.882154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.882194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.882209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.886640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.886679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.886694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.891095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.891134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.891149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.895462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.895684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.895703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.900029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.900070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.900084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.904433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.904472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.904487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.908878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.908918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.908940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.913280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.913319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.913344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.917629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.917668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.917683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.921978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.922017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.922031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.926383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.926421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.926436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.930638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.930677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.930692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.935047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.684 [2024-11-26 20:39:01.935087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.684 [2024-11-26 20:39:01.935103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.684 [2024-11-26 20:39:01.939454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.939492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.939507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.943862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.943909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.943923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.948235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.948272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.948286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.952685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.952739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.952754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.957133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.957173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.957187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.961581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.961814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.961838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.966398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.966438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.966453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.970780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.970818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.970833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.975297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 
00:18:01.685 [2024-11-26 20:39:01.975334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.975349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.979752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.979791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.979805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.984374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.984428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.984457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.989028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.989099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.989114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.993702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.993757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.993772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:01.998335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:01.998372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:01.998387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:02.002793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:02.002832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:02.002854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:02.007414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:02.007452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:02.007466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:02.012015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:02.012076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:02.012092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:02.016588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:02.016626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:02.016641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:02.020964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:02.021002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:02.021017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.685 [2024-11-26 20:39:02.025450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0d9b0) 00:18:01.685 [2024-11-26 20:39:02.025486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.685 [2024-11-26 20:39:02.025500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.685 6889.50 IOPS, 861.19 MiB/s 00:18:01.685 Latency(us) 00:18:01.685 [2024-11-26T20:39:02.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.685 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:01.685 nvme0n1 : 2.00 6887.01 860.88 0.00 0.00 2319.52 1876.71 11796.48 00:18:01.685 [2024-11-26T20:39:02.040Z] =================================================================================================================== 00:18:01.685 [2024-11-26T20:39:02.040Z] Total : 6887.01 860.88 0.00 0.00 2319.52 1876.71 11796.48 00:18:01.685 { 00:18:01.685 "results": [ 00:18:01.685 { 00:18:01.685 "job": "nvme0n1", 00:18:01.685 "core_mask": "0x2", 00:18:01.685 "workload": "randread", 00:18:01.685 "status": "finished", 00:18:01.685 "queue_depth": 16, 00:18:01.685 "io_size": 131072, 00:18:01.685 "runtime": 2.003046, 00:18:01.685 "iops": 6887.011082121929, 00:18:01.685 "mibps": 860.8763852652411, 00:18:01.685 "io_failed": 0, 00:18:01.685 "io_timeout": 0, 00:18:01.685 "avg_latency_us": 2319.5236766944545, 00:18:01.685 
"min_latency_us": 1876.7127272727273, 00:18:01.685 "max_latency_us": 11796.48 00:18:01.685 } 00:18:01.685 ], 00:18:01.685 "core_count": 1 00:18:01.685 } 00:18:01.943 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:01.943 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:01.943 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:01.943 | .driver_specific 00:18:01.943 | .nvme_error 00:18:01.943 | .status_code 00:18:01.943 | .command_transient_transport_error' 00:18:01.943 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 445 > 0 )) 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80504 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80504 ']' 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80504 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80504 00:18:02.202 killing process with pid 80504 00:18:02.202 Received shutdown signal, test time was about 2.000000 seconds 00:18:02.202 00:18:02.202 Latency(us) 00:18:02.202 [2024-11-26T20:39:02.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.202 [2024-11-26T20:39:02.557Z] =================================================================================================================== 00:18:02.202 [2024-11-26T20:39:02.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80504' 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80504 00:18:02.202 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80504 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80557 
00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80557 /var/tmp/bperf.sock 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80557 ']' 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.461 20:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.461 [2024-11-26 20:39:02.747966] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:18:02.461 [2024-11-26 20:39:02.748298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80557 ] 00:18:02.719 [2024-11-26 20:39:02.896279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.719 [2024-11-26 20:39:02.972591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.719 [2024-11-26 20:39:03.045856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.655 20:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.655 20:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:03.655 20:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:03.655 20:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:03.914 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:03.914 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.914 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.914 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.914 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.914 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.172 nvme0n1 00:18:04.172 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:04.172 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.172 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:04.172 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.172 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:04.172 20:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:04.431 Running I/O for 2 seconds... 00:18:04.431 [2024-11-26 20:39:04.573877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef7100 00:18:04.432 [2024-11-26 20:39:04.575631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.575679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.591508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef7970 00:18:04.432 [2024-11-26 20:39:04.593226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.593472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.608940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef81e0 00:18:04.432 [2024-11-26 20:39:04.610654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.610688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.624691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef8a50 00:18:04.432 [2024-11-26 20:39:04.626151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.626203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.640215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef92c0 00:18:04.432 [2024-11-26 20:39:04.641778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.641813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.655726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef9b30 00:18:04.432 [2024-11-26 20:39:04.657290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.657347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.671984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efa3a0 00:18:04.432 [2024-11-26 20:39:04.673770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.673815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.687964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efac10 00:18:04.432 [2024-11-26 20:39:04.689713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.689741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.703188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efb480 00:18:04.432 [2024-11-26 20:39:04.704698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.704732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.718145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efbcf0 00:18:04.432 [2024-11-26 20:39:04.719504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.734763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efc560 00:18:04.432 [2024-11-26 20:39:04.736458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.736639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.751985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efcdd0 00:18:04.432 [2024-11-26 20:39:04.753588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.753775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:04.432 [2024-11-26 20:39:04.768933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efd640 00:18:04.432 [2024-11-26 20:39:04.770543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.432 [2024-11-26 20:39:04.770772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.786148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efdeb0 00:18:04.692 [2024-11-26 20:39:04.787786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.787996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.803924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efe720 00:18:04.692 [2024-11-26 20:39:04.805711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.805914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.821227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eff3c8 00:18:04.692 [2024-11-26 20:39:04.822775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.822973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.845008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eff3c8 00:18:04.692 [2024-11-26 20:39:04.847824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.848064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.861402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efe720 00:18:04.692 [2024-11-26 20:39:04.864150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.864187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.877002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efdeb0 00:18:04.692 [2024-11-26 20:39:04.879344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.879378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.892443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efd640 00:18:04.692 [2024-11-26 20:39:04.894733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.894766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.908358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efcdd0 00:18:04.692 [2024-11-26 20:39:04.910817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.910859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.924174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efc560 00:18:04.692 [2024-11-26 20:39:04.926830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.926865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.940593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efbcf0 00:18:04.692 [2024-11-26 20:39:04.943077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.943112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.957305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efb480 00:18:04.692 [2024-11-26 20:39:04.959800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.960051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.974418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efac10 00:18:04.692 [2024-11-26 20:39:04.977086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:04.977296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:04.991445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016efa3a0 00:18:04.692 [2024-11-26 20:39:04.994146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 
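(Each "Data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pair above is one write whose data digest check failed because of the crc32c error injection armed before the run. A condensed sketch of that setup, taken from the traced commands, where rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the bperf socket carries the host-side bdev RPCs, and the injection goes through the test's default rpc_cmd socket; the -i 256 parameter is reproduced as shown in the trace:

  # host side, against the bdevperf RPC socket
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c operations as in the trace
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # then drive I/O for 2 seconds from bdevperf
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
)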
[2024-11-26 20:39:04.994340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:05.008812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef9b30 00:18:04.692 [2024-11-26 20:39:05.011442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:05.011636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:05.026317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef92c0 00:18:04.692 [2024-11-26 20:39:05.028924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.692 [2024-11-26 20:39:05.029167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:04.692 [2024-11-26 20:39:05.042996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef8a50 00:18:04.951 [2024-11-26 20:39:05.045466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.045702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.059925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef81e0 00:18:04.952 [2024-11-26 20:39:05.062423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.062630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.076074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef7970 00:18:04.952 [2024-11-26 20:39:05.078514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.078543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.092126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef7100 00:18:04.952 [2024-11-26 20:39:05.094384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.094418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.108158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef6890 00:18:04.952 [2024-11-26 20:39:05.110464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14191 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:04.952 [2024-11-26 20:39:05.110505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.123845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef6020 00:18:04.952 [2024-11-26 20:39:05.126517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.126551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.140379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef57b0 00:18:04.952 [2024-11-26 20:39:05.142622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.142661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.156245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef4f40 00:18:04.952 [2024-11-26 20:39:05.158428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.158489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.172622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef46d0 00:18:04.952 [2024-11-26 20:39:05.174839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.174882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.189033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef3e60 00:18:04.952 [2024-11-26 20:39:05.191162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.191199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.205579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef35f0 00:18:04.952 [2024-11-26 20:39:05.208205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.208268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.221852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef2d80 00:18:04.952 [2024-11-26 20:39:05.224037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5261 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.224093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.238676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef2510 00:18:04.952 [2024-11-26 20:39:05.240834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.240868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.255888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef1ca0 00:18:04.952 [2024-11-26 20:39:05.257986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.258196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.273194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef1430 00:18:04.952 [2024-11-26 20:39:05.275262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.275307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:04.952 [2024-11-26 20:39:05.290405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef0bc0 00:18:04.952 [2024-11-26 20:39:05.292534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.952 [2024-11-26 20:39:05.292726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:05.211 [2024-11-26 20:39:05.307643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef0350 00:18:05.211 [2024-11-26 20:39:05.309700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.309736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.324796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eefae0 00:18:05.212 [2024-11-26 20:39:05.326889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.327100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.341709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eef270 00:18:05.212 [2024-11-26 20:39:05.343791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.343829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.358463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eeea00 00:18:05.212 [2024-11-26 20:39:05.360444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.360483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.374720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eee190 00:18:05.212 [2024-11-26 20:39:05.376594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.376629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.390714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eed920 00:18:05.212 [2024-11-26 20:39:05.392719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.392756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.407331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eed0b0 00:18:05.212 [2024-11-26 20:39:05.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.409592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.424235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eec840 00:18:05.212 [2024-11-26 20:39:05.426137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.426330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.440892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eebfd0 00:18:05.212 [2024-11-26 20:39:05.442729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.442766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.457507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eeb760 00:18:05.212 [2024-11-26 20:39:05.459641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:92 nsid:1 lba:18050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.459685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.474669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eeaef0 00:18:05.212 [2024-11-26 20:39:05.476535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.476570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.490958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eea680 00:18:05.212 [2024-11-26 20:39:05.492776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.492826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.506505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee9e10 00:18:05.212 [2024-11-26 20:39:05.508314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.508347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.521768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee95a0 00:18:05.212 [2024-11-26 20:39:05.523524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.523558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.537411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee8d30 00:18:05.212 [2024-11-26 20:39:05.539383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.539427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:05.212 [2024-11-26 20:39:05.553476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee84c0 00:18:05.212 [2024-11-26 20:39:05.555319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.212 [2024-11-26 20:39:05.555353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:05.471 15182.00 IOPS, 59.30 MiB/s [2024-11-26T20:39:05.826Z] [2024-11-26 20:39:05.570288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee7c50 00:18:05.471 [2024-11-26 
20:39:05.571997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.572034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.585725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee73e0 00:18:05.471 [2024-11-26 20:39:05.587593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.587638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.602466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee6b70 00:18:05.471 [2024-11-26 20:39:05.604284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.604320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.620243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee6300 00:18:05.471 [2024-11-26 20:39:05.622007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.622046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.637399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee5a90 00:18:05.471 [2024-11-26 20:39:05.638890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.638923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.653147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee5220 00:18:05.471 [2024-11-26 20:39:05.654813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.654846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.669013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee49b0 00:18:05.471 [2024-11-26 20:39:05.670501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.670535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.684120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with 
pdu=0x200016ee4140 00:18:05.471 [2024-11-26 20:39:05.685580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.685612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.699673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee38d0 00:18:05.471 [2024-11-26 20:39:05.701127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.701160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.715972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee3060 00:18:05.471 [2024-11-26 20:39:05.717513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.471 [2024-11-26 20:39:05.717758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:05.471 [2024-11-26 20:39:05.733161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee27f0 00:18:05.472 [2024-11-26 20:39:05.735168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.472 [2024-11-26 20:39:05.735205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:05.472 [2024-11-26 20:39:05.750377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee1f80 00:18:05.472 [2024-11-26 20:39:05.751922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.472 [2024-11-26 20:39:05.752259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:05.472 [2024-11-26 20:39:05.767953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee1710 00:18:05.472 [2024-11-26 20:39:05.769488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.472 [2024-11-26 20:39:05.769527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:05.472 [2024-11-26 20:39:05.784960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee0ea0 00:18:05.472 [2024-11-26 20:39:05.786835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.472 [2024-11-26 20:39:05.786866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:05.472 [2024-11-26 20:39:05.802547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x103dae0) with pdu=0x200016ee0630 00:18:05.472 [2024-11-26 20:39:05.803963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.472 [2024-11-26 20:39:05.804006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:05.472 [2024-11-26 20:39:05.818578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016edfdc0 00:18:05.472 [2024-11-26 20:39:05.819965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.472 [2024-11-26 20:39:05.820006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.834507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016edf550 00:18:05.731 [2024-11-26 20:39:05.835811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.835846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.850357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016edece0 00:18:05.731 [2024-11-26 20:39:05.851639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.851675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.866060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ede470 00:18:05.731 [2024-11-26 20:39:05.867400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.867434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.887352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eddc00 00:18:05.731 [2024-11-26 20:39:05.889808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.889843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.902386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ede470 00:18:05.731 [2024-11-26 20:39:05.905098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.905132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.917943] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x103dae0) with pdu=0x200016edece0 00:18:05.731 [2024-11-26 20:39:05.920576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.920604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.933566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016edf550 00:18:05.731 [2024-11-26 20:39:05.935844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.936169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:05.731 [2024-11-26 20:39:05.948954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016edfdc0 00:18:05.731 [2024-11-26 20:39:05.951504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.731 [2024-11-26 20:39:05.951539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:05.964729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee0630 00:18:05.732 [2024-11-26 20:39:05.966999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:05.967167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:05.980209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee0ea0 00:18:05.732 [2024-11-26 20:39:05.982629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:05.982665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:05.996645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee1710 00:18:05.732 [2024-11-26 20:39:05.999054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:05.999105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:06.013731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee1f80 00:18:05.732 [2024-11-26 20:39:06.016207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:06.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:06.030805] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee27f0 00:18:05.732 [2024-11-26 20:39:06.033545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:06.033577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:06.047813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee3060 00:18:05.732 [2024-11-26 20:39:06.050254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:06.050457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:06.064787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee38d0 00:18:05.732 [2024-11-26 20:39:06.067163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:06.067199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:05.732 [2024-11-26 20:39:06.081323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee4140 00:18:05.732 [2024-11-26 20:39:06.083769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.732 [2024-11-26 20:39:06.083800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.097891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee49b0 00:18:05.991 [2024-11-26 20:39:06.100230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.100294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.114504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee5220 00:18:05.991 [2024-11-26 20:39:06.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.116823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.130862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee5a90 00:18:05.991 [2024-11-26 20:39:06.133133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.133167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:05.991 
[2024-11-26 20:39:06.147022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee6300 00:18:05.991 [2024-11-26 20:39:06.149366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.149400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.163332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee6b70 00:18:05.991 [2024-11-26 20:39:06.165891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.165941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.179897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee73e0 00:18:05.991 [2024-11-26 20:39:06.182115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.182151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.196309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee7c50 00:18:05.991 [2024-11-26 20:39:06.198399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.198434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.212609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee84c0 00:18:05.991 [2024-11-26 20:39:06.214675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.214708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.228700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee8d30 00:18:05.991 [2024-11-26 20:39:06.230791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.230825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.244916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee95a0 00:18:05.991 [2024-11-26 20:39:06.247036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.247069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 
dnr:0 00:18:05.991 [2024-11-26 20:39:06.260844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ee9e10 00:18:05.991 [2024-11-26 20:39:06.262798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.262969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.276412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eea680 00:18:05.991 [2024-11-26 20:39:06.278436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.278485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.291896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eeaef0 00:18:05.991 [2024-11-26 20:39:06.293813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.293969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.307765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eeb760 00:18:05.991 [2024-11-26 20:39:06.309742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.309777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.324041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eebfd0 00:18:05.991 [2024-11-26 20:39:06.326027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.326079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:05.991 [2024-11-26 20:39:06.340326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eec840 00:18:05.991 [2024-11-26 20:39:06.342195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.991 [2024-11-26 20:39:06.342254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.356683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eed0b0 00:18:06.251 [2024-11-26 20:39:06.358716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.358748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 
cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.373484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eed920 00:18:06.251 [2024-11-26 20:39:06.375833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.375873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.389875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eee190 00:18:06.251 [2024-11-26 20:39:06.391709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.391941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.406911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eeea00 00:18:06.251 [2024-11-26 20:39:06.409136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.409173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.423982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eef270 00:18:06.251 [2024-11-26 20:39:06.426035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.426106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.440830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016eefae0 00:18:06.251 [2024-11-26 20:39:06.442634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.442666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.456335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef0350 00:18:06.251 [2024-11-26 20:39:06.458100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.458135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.472932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef0bc0 00:18:06.251 [2024-11-26 20:39:06.474877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.474912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.489908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef1430 00:18:06.251 [2024-11-26 20:39:06.491827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.491871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.506668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef1ca0 00:18:06.251 [2024-11-26 20:39:06.508543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.508567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.523465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef2510 00:18:06.251 [2024-11-26 20:39:06.525359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.525394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:06.251 [2024-11-26 20:39:06.540210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef2d80 00:18:06.251 [2024-11-26 20:39:06.542030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.542066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:06.251 15371.00 IOPS, 60.04 MiB/s [2024-11-26T20:39:06.606Z] [2024-11-26 20:39:06.556776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x103dae0) with pdu=0x200016ef35f0 00:18:06.251 [2024-11-26 20:39:06.558531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.251 [2024-11-26 20:39:06.558565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:06.251 00:18:06.251 Latency(us) 00:18:06.251 [2024-11-26T20:39:06.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.251 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.251 nvme0n1 : 2.01 15388.09 60.11 0.00 0.00 8309.76 7298.33 28835.84 00:18:06.251 [2024-11-26T20:39:06.606Z] =================================================================================================================== 00:18:06.251 [2024-11-26T20:39:06.606Z] Total : 15388.09 60.11 0.00 0.00 8309.76 7298.33 28835.84 00:18:06.251 { 00:18:06.251 "results": [ 00:18:06.251 { 00:18:06.251 "job": "nvme0n1", 00:18:06.251 "core_mask": "0x2", 00:18:06.251 "workload": "randwrite", 00:18:06.251 "status": "finished", 00:18:06.251 "queue_depth": 128, 00:18:06.251 "io_size": 4096, 00:18:06.251 "runtime": 2.006097, 00:18:06.251 "iops": 
15388.089409435337, 00:18:06.251 "mibps": 60.109724255606785, 00:18:06.251 "io_failed": 0, 00:18:06.251 "io_timeout": 0, 00:18:06.251 "avg_latency_us": 8309.755687722707, 00:18:06.251 "min_latency_us": 7298.327272727272, 00:18:06.251 "max_latency_us": 28835.84 00:18:06.251 } 00:18:06.251 ], 00:18:06.251 "core_count": 1 00:18:06.251 } 00:18:06.251 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:06.251 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:06.251 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:06.251 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:06.251 | .driver_specific 00:18:06.251 | .nvme_error 00:18:06.251 | .status_code 00:18:06.251 | .command_transient_transport_error' 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80557 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80557 ']' 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80557 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80557 00:18:06.819 killing process with pid 80557 00:18:06.819 Received shutdown signal, test time was about 2.000000 seconds 00:18:06.819 00:18:06.819 Latency(us) 00:18:06.819 [2024-11-26T20:39:07.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.819 [2024-11-26T20:39:07.174Z] =================================================================================================================== 00:18:06.819 [2024-11-26T20:39:07.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80557' 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80557 00:18:06.819 20:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80557 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:07.078 20:39:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80617 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80617 /var/tmp/bperf.sock 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80617 ']' 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:07.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.078 20:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.078 [2024-11-26 20:39:07.275461] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:18:07.078 [2024-11-26 20:39:07.275888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80617 ] 00:18:07.078 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:07.078 Zero copy mechanism will not be used. 
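For reference, the xtrace lines above correspond to the launch of the second error run (128 KiB writes, queue depth 16). The sketch below is a minimal reconstruction assembled only from the commands visible in the trace; the waitforlisten helper comes from the suite's common scripts and is assumed to block until the RPC socket accepts connections.

# Start bdevperf on core mask 0x2 with a dedicated RPC socket; -z makes it wait
# for RPC configuration before running any workload.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Block until the UNIX-domain RPC socket is listening (helper from the test framework).
waitforlisten "$bperfpid" /var/tmp/bperf.sock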
00:18:07.079 [2024-11-26 20:39:07.426604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.338 [2024-11-26 20:39:07.505889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.338 [2024-11-26 20:39:07.580916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.274 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.532 nvme0n1 00:18:08.791 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:08.791 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.791 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.791 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.791 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:08.791 20:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:08.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:08.791 Zero copy mechanism will not be used. 00:18:08.791 Running I/O for 2 seconds... 
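The RPC calls traced above configure this digest-error scenario; the long run of "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" records that follows is the intended outcome, and get_transient_errcount later verifies that a non-zero number of such completions was observed. A condensed sketch of that sequence is shown below. Every command is taken verbatim from the trace; bperf_rpc and rpc_cmd are helper functions from digest.sh and the common test scripts (bperf_rpc wraps rpc.py with -s /var/tmp/bperf.sock), and the reading of -i 32 as the injection cadence is an assumption.

# Keep per-command error statistics and retry failed I/O indefinitely in bdevperf.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Make sure crc32c error injection is disabled before connecting.
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the controller over TCP with data digest (--ddgst) enabled.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Start corrupting crc32c operations so the host sees data digest errors.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the workload through bdevperf's RPC interface.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
# Afterwards, count commands that completed with a transient transport error.
bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific | .nvme_error | .status_code
    | .command_transient_transport_error'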
00:18:08.791 [2024-11-26 20:39:09.038457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.791 [2024-11-26 20:39:09.038896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.791 [2024-11-26 20:39:09.038930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.791 [2024-11-26 20:39:09.045012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.791 [2024-11-26 20:39:09.045122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.791 [2024-11-26 20:39:09.045148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.791 [2024-11-26 20:39:09.050822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.791 [2024-11-26 20:39:09.050901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.791 [2024-11-26 20:39:09.050925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.791 [2024-11-26 20:39:09.056774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.791 [2024-11-26 20:39:09.056877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.791 [2024-11-26 20:39:09.056902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.791 [2024-11-26 20:39:09.062705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.791 [2024-11-26 20:39:09.062805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.062846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.068557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.068654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.068678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.074244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.074323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.074349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.079957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.080060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.080083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.085785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.086163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.092039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.092337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.092670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.097957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.098206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.098385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.103910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.104358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.109841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.110109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.110284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.115700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.115974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.116285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.121700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.121967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.122145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.127673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.127968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.128190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.133657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.133973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.134172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.792 [2024-11-26 20:39:09.139788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:08.792 [2024-11-26 20:39:09.140064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.792 [2024-11-26 20:39:09.140240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.145757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.146049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.146309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.151735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.152040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.152198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.157669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.157954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.158130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.163538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.163791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.163817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.169456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.169556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.169580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.175169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.175305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.175329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.180858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.180940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.180965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.186555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.186668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.186693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.192351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.192434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.192458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.198245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.198619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.198643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.204749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.205029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.205278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.210696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.210960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.211119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.216683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.216938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.217092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.222670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.222957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.223140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.228847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.229143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.229437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.234957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.235265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.235510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.241152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.241455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 
20:39:09.241590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.247013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.247115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.247139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.252751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.252831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.252854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.258390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.258468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.258491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.264011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.264106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.264130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.269964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.270320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.270344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.276155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.276279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.276320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.282105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.282385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:09.051 [2024-11-26 20:39:09.282408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.288154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.288259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.293670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.293910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.293932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.299342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.299444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.299467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.304792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.305084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.305108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.310615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.310693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.310714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.316067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.316144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.316166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.322105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.322217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.322241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.328125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.328206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.328229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.334056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.334181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.334205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.340119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.340212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.340236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.346368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.346463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.346487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.352551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.352664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.352685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.357935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.358012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.358034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.363469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.363588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.363611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.369601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.369676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.369698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.375606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.375709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.375732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.381499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.381621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.381643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.387160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.387312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.387335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.392971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.393454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.393476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.051 [2024-11-26 20:39:09.398992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.051 [2024-11-26 20:39:09.399070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.051 [2024-11-26 20:39:09.399093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.312 [2024-11-26 20:39:09.404565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.312 [2024-11-26 20:39:09.404643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.312 [2024-11-26 20:39:09.404665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.312 [2024-11-26 20:39:09.410189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.312 [2024-11-26 20:39:09.410326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.312 [2024-11-26 20:39:09.410350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.312 [2024-11-26 20:39:09.415815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.312 [2024-11-26 20:39:09.415942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.415964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.422084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.422167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.422191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.428169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.428301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.428327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.434117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.434199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.434223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.440082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.440165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.440189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.445901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.446011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.446034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.451607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.451689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.451712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.457338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.457455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.457477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.462733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.462811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.462833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.468237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.468357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.468379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.473623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.473700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.473722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.479384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.479542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.485364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 
20:39:09.485529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.485551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.491180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.491323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.491348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.497157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.497537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.503281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.503375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.503415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.509494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.509584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.509607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.515653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.515734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.515758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.521468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.521567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.521589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.527405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with 
pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.527527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.527549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.533604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.533735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.533759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.539701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.539830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.539854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.545727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.545829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.545854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.551767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.551849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.551873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.558018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.558117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.558141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.563907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.563995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.564032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.569902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.569983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.313 [2024-11-26 20:39:09.570006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.313 [2024-11-26 20:39:09.575921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.313 [2024-11-26 20:39:09.576017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.576066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.581991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.582090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.582113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.587964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.588102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.588125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.594134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.594259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.594282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.600249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.600423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.600452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.606700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.606815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.606841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.612897] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.613359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.613383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.619404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.619536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.619570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.625513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.625609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.625630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.631398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.631523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.631544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.636993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.637293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.637316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.642717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.642793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.642814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.648098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.648173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.648194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.653416] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.653508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.653529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.314 [2024-11-26 20:39:09.659411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.314 [2024-11-26 20:39:09.659520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.314 [2024-11-26 20:39:09.659541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.574 [2024-11-26 20:39:09.665478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.665553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.574 [2024-11-26 20:39:09.665575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.574 [2024-11-26 20:39:09.671273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.671379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.574 [2024-11-26 20:39:09.671402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.574 [2024-11-26 20:39:09.677147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.677574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.574 [2024-11-26 20:39:09.677598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.574 [2024-11-26 20:39:09.683022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.683131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.574 [2024-11-26 20:39:09.683153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.574 [2024-11-26 20:39:09.688578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.688686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.574 [2024-11-26 20:39:09.688709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.574 
[2024-11-26 20:39:09.693996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.694076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.574 [2024-11-26 20:39:09.694099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.574 [2024-11-26 20:39:09.699779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.574 [2024-11-26 20:39:09.699862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.699886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.706053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.706134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.706159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.712028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.712146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.712171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.717901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.718008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.718032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.723626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.723710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.723734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.729770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.729852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.729876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.736107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.736201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.736223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.742189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.742285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.742306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.747659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.747755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.747777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.753222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.753332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.753354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.758765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.758843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.758865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.764840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.765203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.765227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.771031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.771129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.776925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.777224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.782976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.783105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.783128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.788942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.789194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.789217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.794862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.794939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.794960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.800392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.800500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.800523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.806061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.806156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.806178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.811624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.811736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.811777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.817684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.817780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-26 20:39:09.817805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-26 20:39:09.823600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.575 [2024-11-26 20:39:09.823711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.823737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.829518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.829633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.829658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.835486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.835587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.835612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.841424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.841506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.841529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.847260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.847368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.847391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.852977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.853371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.853395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.859099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.859181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.859205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.864871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.865109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.865133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.870999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.871120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.876965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.877220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.877245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.883007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.883164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.883187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.888970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.889228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.889252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.895027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.895127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 
20:39:09.895151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.900815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.901072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.901096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.906730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.906829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.906852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.912433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.912570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.912592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.918315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.918408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.918431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-26 20:39:09.924397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.576 [2024-11-26 20:39:09.924500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-26 20:39:09.924523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.930428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.930566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.930590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.936531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.936627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:09.837 [2024-11-26 20:39:09.936650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.942519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.942612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.942634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.948113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.948552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.948574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.954003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.954133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.954155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.959649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.959766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.959790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.965188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.965311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.965334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.970860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.971267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.971289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.977128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.977226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.977250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.983112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.983217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.983242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.989319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.989446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.989485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:09.995265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:09.995380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:09.995420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.001241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.001363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.001388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.006986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.007081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.007105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.012967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.013066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.013091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.018804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.018914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.018939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.024766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.024861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.024887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.030635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.030771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.030794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.837 5223.00 IOPS, 652.88 MiB/s [2024-11-26T20:39:10.192Z] [2024-11-26 20:39:10.037994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.038091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.038125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.043754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.043846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.043871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.049611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.049726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.049749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.055371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.055479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.055501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.061125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 
20:39:10.061235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.061273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.066828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.066903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.066926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.072498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.837 [2024-11-26 20:39:10.072575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.837 [2024-11-26 20:39:10.072599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.837 [2024-11-26 20:39:10.078089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.078191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.078213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.083640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.083735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.083757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.089155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.089246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.089270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.095119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.095227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.095250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.101089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with 
pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.101181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.101205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.106892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.106976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.106999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.112730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.112847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.112869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.118617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.118694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.118716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.124581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.124668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.124691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.130325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.130470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.130491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.136282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.136402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.136424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.141978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.142102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.142124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.147689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.147769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.147792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.153423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.153548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.153569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.159331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.159463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.159485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.165414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.165526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.165548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.171298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.171409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.171431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.177284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.177385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.177409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.838 [2024-11-26 20:39:10.183024] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:09.838 [2024-11-26 20:39:10.183116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.838 [2024-11-26 20:39:10.183137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.188686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.188778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.188799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.194330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.194464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.194485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.200206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.200296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.200320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.206023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.206149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.206174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.211854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.212031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.212055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.217864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.217975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.218000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.223985] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.224116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.224140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.230004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.230136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.230160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.236200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.236329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.236352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.242183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.242281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.242305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.248068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.248152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.248174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.253805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.253906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.253927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.259570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.259659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.259682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.100 
[2024-11-26 20:39:10.265506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.265608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.265629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.271438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.271539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.271588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.277336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.277432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.277456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.283132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.283235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.283258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.288899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.288974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.288996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.294665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.294776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.294798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.100 [2024-11-26 20:39:10.300134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.100 [2024-11-26 20:39:10.300209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.100 [2024-11-26 20:39:10.300231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.305454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.305530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.305551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.311082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.311162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.311186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.317075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.317154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.317177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.322894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.322971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.322992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.328769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.328846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.328868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.334593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.334701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.334722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.340110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.340184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.340206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.345683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.345760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.345783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.351101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.351176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.351197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.356596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.356697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.356718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.362205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.362332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.362356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.368216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.368310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.368334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.374011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.374121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.374144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.380051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.380167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.380190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.385977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.386087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.386111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.391884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.391975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.391998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.397789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.397868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.397891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.403661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.403742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.403766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.409710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.409832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.415744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.415848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.415876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.422025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.422177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.422218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.428148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.428268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.428292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.434322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.434396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.434448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.440113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.440219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.440242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.445735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.445840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.445860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.101 [2024-11-26 20:39:10.451065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.101 [2024-11-26 20:39:10.451171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.101 [2024-11-26 20:39:10.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.457277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.457404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.457442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.463406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.463544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 
20:39:10.463594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.469529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.469650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.469671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.475333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.475470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.475491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.480705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.480811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.480832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.486025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.486126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.486147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.491381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.491477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.491515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.497113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.497239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.497263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.502672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.502777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:10.363 [2024-11-26 20:39:10.502799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.508596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.508713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.508735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.514611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.514715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.514736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.520681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.520769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.520796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.526483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.526567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.526589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.532484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.532600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.532623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.538152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.538276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.538313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.544023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.544129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.544151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.549815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.549924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.549947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.555604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.555686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.555709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.561278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.561390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.561414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.567206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.567317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.567341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.573128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.573232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.363 [2024-11-26 20:39:10.573255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.363 [2024-11-26 20:39:10.579022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.363 [2024-11-26 20:39:10.579133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.579156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.584980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.585102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.585125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.590842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.590935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.590957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.596536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.596646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.596667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.602194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.602310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.602344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.608235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.608355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.608378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.614294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.614374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.614397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.620035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.620145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.620168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.625745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.625825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.625848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.631512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.631630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.631654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.637508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.637621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.637644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.643423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.643501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.643524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.649290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.649380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.649419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.655005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.655099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.655123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.660808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.660905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.660928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.666529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.666609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.666634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.672285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.672390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.672413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.678040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.678135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.678158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.684005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.684120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.684144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.689868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.689968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.689991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.695725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.695804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.695827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.701647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.701743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.701764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.707393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 
20:39:10.707484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.707506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.364 [2024-11-26 20:39:10.713053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.364 [2024-11-26 20:39:10.713161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.364 [2024-11-26 20:39:10.713182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.718659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.718732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.718754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.724305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.724449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.724471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.730553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.730661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.730688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.736680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.736759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.736782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.742684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.742764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.742787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.748672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with 
pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.748806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.748831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.754755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.754835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.754859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.760873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.760952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.760975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.767061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.767136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.767158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.773305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.773435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.773458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.779492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.779615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.779639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.785351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.785488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.785510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.790916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.790988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.791009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.796650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.796720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.796742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.801901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.801973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.801994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.806939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.807029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.807050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.812086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.812178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.812203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.817376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.817519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.626 [2024-11-26 20:39:10.817545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.626 [2024-11-26 20:39:10.822737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.626 [2024-11-26 20:39:10.822814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.822839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.828056] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.828164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.828191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.833362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.833468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.833493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.838414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.838485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.838506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.843585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.843666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.843686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.848623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.848696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.848717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.853769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.853866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.853889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.858858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.858931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.858952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.627 
[2024-11-26 20:39:10.864641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.864756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.864779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.870406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.870543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.870566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.876519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.876646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.876668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.882631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.882731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.882753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.888410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.888491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.888514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.894235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.894350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.894374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.900085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.900184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.900223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.905995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.906084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.906105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.911913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.911987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.912024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.917326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.917438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.917462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.923119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.923205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.923229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.928833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.928940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.928961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.627 [2024-11-26 20:39:10.934864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.627 [2024-11-26 20:39:10.934944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.627 [2024-11-26 20:39:10.934967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.940819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.940908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.940929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.946820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.946922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.946943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.952615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.952714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.952735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.958290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.958385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.958409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.963935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.964026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.964048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.969480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.969554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.969576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.628 [2024-11-26 20:39:10.974795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.628 [2024-11-26 20:39:10.974872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.628 [2024-11-26 20:39:10.974894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:10.980104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:10.980205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:10.980248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:10.985519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:10.985626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:10.985650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:10.991069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:10.991193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:10.991215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:10.996666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:10.996768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:10.996791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:11.002007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.002100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.002122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:11.007380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.007486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.007508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:11.012867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.012962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.012983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:11.018063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.018137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.018159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:11.023323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.023425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.023447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.888 [2024-11-26 20:39:11.028640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.028735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.028757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.888 5301.00 IOPS, 662.62 MiB/s [2024-11-26T20:39:11.243Z] [2024-11-26 20:39:11.035621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a5b0) with pdu=0x200016eff3c8 00:18:10.888 [2024-11-26 20:39:11.035705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.888 [2024-11-26 20:39:11.035727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.888 00:18:10.888 Latency(us) 00:18:10.888 [2024-11-26T20:39:11.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.888 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:10.888 nvme0n1 : 2.00 5297.88 662.24 0.00 0.00 3013.53 2293.76 8579.26 00:18:10.888 [2024-11-26T20:39:11.243Z] =================================================================================================================== 00:18:10.889 [2024-11-26T20:39:11.244Z] Total : 5297.88 662.24 0.00 0.00 3013.53 2293.76 8579.26 00:18:10.889 { 00:18:10.889 "results": [ 00:18:10.889 { 00:18:10.889 "job": "nvme0n1", 00:18:10.889 "core_mask": "0x2", 00:18:10.889 "workload": "randwrite", 00:18:10.889 "status": "finished", 00:18:10.889 "queue_depth": 16, 00:18:10.889 "io_size": 131072, 00:18:10.889 "runtime": 2.004197, 00:18:10.889 "iops": 5297.882393796618, 00:18:10.889 "mibps": 662.2352992245773, 00:18:10.889 "io_failed": 0, 00:18:10.889 "io_timeout": 0, 00:18:10.889 "avg_latency_us": 3013.5314043048684, 00:18:10.889 "min_latency_us": 2293.76, 00:18:10.889 "max_latency_us": 8579.258181818182 00:18:10.889 } 00:18:10.889 ], 00:18:10.889 "core_count": 1 00:18:10.889 } 00:18:10.889 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:10.889 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:10.889 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:10.889 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:10.889 | .driver_specific 00:18:10.889 | .nvme_error 00:18:10.889 | .status_code 
00:18:10.889 | .command_transient_transport_error' 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 )) 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80617 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80617 ']' 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80617 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80617 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:11.148 killing process with pid 80617 00:18:11.148 Received shutdown signal, test time was about 2.000000 seconds 00:18:11.148 00:18:11.148 Latency(us) 00:18:11.148 [2024-11-26T20:39:11.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.148 [2024-11-26T20:39:11.503Z] =================================================================================================================== 00:18:11.148 [2024-11-26T20:39:11.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80617' 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80617 00:18:11.148 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80617 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80419 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80419 ']' 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80419 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80419 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.408 killing process with pid 80419 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80419' 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80419 00:18:11.408 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80419 00:18:11.667 00:18:11.667 real 
0m18.046s 00:18:11.667 user 0m35.535s 00:18:11.667 sys 0m4.958s 00:18:11.667 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.667 20:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:11.667 ************************************ 00:18:11.667 END TEST nvmf_digest_error 00:18:11.667 ************************************ 00:18:11.667 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:11.667 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:11.667 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:11.667 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:11.928 rmmod nvme_tcp 00:18:11.928 rmmod nvme_fabrics 00:18:11.928 rmmod nvme_keyring 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80419 ']' 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80419 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80419 ']' 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80419 00:18:11.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80419) - No such process 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80419 is not found' 00:18:11.928 Process with pid 80419 is not found 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:11.928 20:39:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:11.928 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:12.190 00:18:12.190 real 0m35.766s 00:18:12.190 user 1m8.473s 00:18:12.190 sys 0m9.997s 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.190 ************************************ 00:18:12.190 END TEST nvmf_digest 00:18:12.190 ************************************ 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.190 ************************************ 00:18:12.190 START TEST nvmf_host_multipath 00:18:12.190 ************************************ 00:18:12.190 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:12.450 * Looking for test storage... 
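The nvmftestfini teardown traced a few lines above unloads the host-side NVMe fabrics modules, strips only the SPDK-tagged iptables rules, and dismantles the veth/bridge topology together with the target namespace. A minimal standalone sketch of that sequence, using the interface and namespace names from this run and assuming the final remove_spdk_ns step simply deletes the namespace, would be:

    # teardown sketch -- mirrors the commands traced in nvmftestfini above
    modprobe -v -r nvme-tcp                                  # also pulls out the nvme_fabrics/nvme_keyring dependencies
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only rules carrying the SPDK_NVMF comment tag
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" nomaster && ip link set "$p" down   # detach the peer ends from the bridge, then down them
    done
    ip link delete nvmf_br type bridge                       # remove the test bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of the remove_spdk_ns helper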
00:18:12.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.450 --rc genhtml_branch_coverage=1 00:18:12.450 --rc genhtml_function_coverage=1 00:18:12.450 --rc genhtml_legend=1 00:18:12.450 --rc geninfo_all_blocks=1 00:18:12.450 --rc geninfo_unexecuted_blocks=1 00:18:12.450 00:18:12.450 ' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.450 --rc genhtml_branch_coverage=1 00:18:12.450 --rc genhtml_function_coverage=1 00:18:12.450 --rc genhtml_legend=1 00:18:12.450 --rc geninfo_all_blocks=1 00:18:12.450 --rc geninfo_unexecuted_blocks=1 00:18:12.450 00:18:12.450 ' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.450 --rc genhtml_branch_coverage=1 00:18:12.450 --rc genhtml_function_coverage=1 00:18:12.450 --rc genhtml_legend=1 00:18:12.450 --rc geninfo_all_blocks=1 00:18:12.450 --rc geninfo_unexecuted_blocks=1 00:18:12.450 00:18:12.450 ' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.450 --rc genhtml_branch_coverage=1 00:18:12.450 --rc genhtml_function_coverage=1 00:18:12.450 --rc genhtml_legend=1 00:18:12.450 --rc geninfo_all_blocks=1 00:18:12.450 --rc geninfo_unexecuted_blocks=1 00:18:12.450 00:18:12.450 ' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.450 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:12.451 Cannot find device "nvmf_init_br" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:12.451 Cannot find device "nvmf_init_br2" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:12.451 Cannot find device "nvmf_tgt_br" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.451 Cannot find device "nvmf_tgt_br2" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:12.451 Cannot find device "nvmf_init_br" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:12.451 Cannot find device "nvmf_init_br2" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:12.451 Cannot find device "nvmf_tgt_br" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:12.451 Cannot find device "nvmf_tgt_br2" 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:12.451 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:12.710 Cannot find device "nvmf_br" 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:12.710 Cannot find device "nvmf_init_if" 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:12.710 Cannot find device "nvmf_init_if2" 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:12.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:12.710 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:12.711 20:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
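The nvmf_veth_init trace above builds the virtual test network: a namespace for the target, two veth pairs per side, /24 addresses in 10.0.0.0/24, and a bridge joining the peer ends. A condensed sketch of those steps, with the names and addresses taken directly from this run, looks like:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side, path 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator side, path 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side, path 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target side, path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # target ends live inside the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" up && ip link set "$p" master nvmf_br      # bridge all four peer ends together
    done

The iptables ACCEPT rules for port 4420 and the ping checks that follow in the trace below then confirm that the initiator addresses (10.0.0.1/10.0.0.2) can reach the target addresses (10.0.0.3/10.0.0.4) across the bridge.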
00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:12.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:12.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:12.711 00:18:12.711 --- 10.0.0.3 ping statistics --- 00:18:12.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.711 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:12.711 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:12.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:12.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:18:12.970 00:18:12.970 --- 10.0.0.4 ping statistics --- 00:18:12.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.970 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:12.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:12.970 00:18:12.970 --- 10.0.0.1 ping statistics --- 00:18:12.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.970 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:12.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:12.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:18:12.970 00:18:12.970 --- 10.0.0.2 ping statistics --- 00:18:12.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.970 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:12.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80940 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80940 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80940 ']' 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.970 20:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:12.970 [2024-11-26 20:39:13.180449] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
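Just above, nvmfappstart launches the target inside the namespace with core mask 0x3 and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait pattern, where the polling loop is only a stand-in for waitforlisten rather than its actual implementation, is:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the app responds (stand-in for waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done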
00:18:12.970 [2024-11-26 20:39:13.180578] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.230 [2024-11-26 20:39:13.336273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:13.230 [2024-11-26 20:39:13.426102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.230 [2024-11-26 20:39:13.426501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.230 [2024-11-26 20:39:13.426691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.230 [2024-11-26 20:39:13.426976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.230 [2024-11-26 20:39:13.427028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.230 [2024-11-26 20:39:13.428845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.230 [2024-11-26 20:39:13.428859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.230 [2024-11-26 20:39:13.507343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80940 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:14.166 [2024-11-26 20:39:14.494281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.166 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:14.733 Malloc0 00:18:14.733 20:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:14.733 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.992 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:15.252 [2024-11-26 20:39:15.586980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:15.512 [2024-11-26 20:39:15.839078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:15.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80996 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80996 /var/tmp/bdevperf.sock 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80996 ']' 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.512 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.771 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.771 20:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.030 20:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.030 20:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:16.030 20:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:16.289 20:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:16.549 Nvme0n1 00:18:16.549 20:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:17.118 Nvme0n1 00:18:17.118 20:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:17.118 20:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.055 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:18.055 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:18.314 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:18.573 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:18.573 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81033 00:18:18.573 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:18.573 20:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:25.164 20:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:25.164 20:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.164 Attaching 4 probes... 00:18:25.164 @path[10.0.0.3, 4421]: 17492 00:18:25.164 @path[10.0.0.3, 4421]: 18043 00:18:25.164 @path[10.0.0.3, 4421]: 17313 00:18:25.164 @path[10.0.0.3, 4421]: 17512 00:18:25.164 @path[10.0.0.3, 4421]: 17620 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81033 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:25.164 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:25.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:25.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81152 00:18:25.423 20:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:25.423 20:39:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:32.014 20:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:32.014 20:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.014 Attaching 4 probes... 00:18:32.014 @path[10.0.0.3, 4420]: 17389 00:18:32.014 @path[10.0.0.3, 4420]: 17550 00:18:32.014 @path[10.0.0.3, 4420]: 17982 00:18:32.014 @path[10.0.0.3, 4420]: 17568 00:18:32.014 @path[10.0.0.3, 4420]: 18502 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81152 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:32.014 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:32.274 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:32.274 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81266 00:18:32.274 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:32.274 20:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:38.844 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:38.844 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.845 Attaching 4 probes... 00:18:38.845 @path[10.0.0.3, 4421]: 14238 00:18:38.845 @path[10.0.0.3, 4421]: 17280 00:18:38.845 @path[10.0.0.3, 4421]: 17177 00:18:38.845 @path[10.0.0.3, 4421]: 17272 00:18:38.845 @path[10.0.0.3, 4421]: 17567 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81266 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:38.845 20:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:39.103 20:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:39.361 20:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:39.361 20:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81384 00:18:39.361 20:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:39.361 20:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.923 Attaching 4 probes... 
00:18:45.923 00:18:45.923 00:18:45.923 00:18:45.923 00:18:45.923 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81384 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:45.923 20:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:45.923 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:46.182 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:46.182 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.182 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81496 00:18:46.182 20:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.752 Attaching 4 probes... 
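The confirm_io_on_port cycles traced above derive the active port from the subsystem's listener list: nvmf_subsystem_get_listeners returns JSON, and the jq filter keeps only the trsvcid whose first ANA state matches the expected value (an empty expected state, as in the inaccessible/inaccessible cycle, therefore yields an empty port). A minimal standalone sketch of that query, assuming the same rpc.py path, default RPC socket and NQN as this run:

# Sketch only; mirrors the get_listeners + jq step traced at host/multipath.sh@67 above.
expected_state=optimized   # or non_optimized, or "" when both paths are inaccessible
active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
echo "$active_port"   # 4421 in the optimized/4421 cycle that follows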
00:18:52.752 @path[10.0.0.3, 4421]: 16047 00:18:52.752 @path[10.0.0.3, 4421]: 17232 00:18:52.752 @path[10.0.0.3, 4421]: 17487 00:18:52.752 @path[10.0.0.3, 4421]: 16836 00:18:52.752 @path[10.0.0.3, 4421]: 17522 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81496 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.752 20:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:52.752 20:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:53.753 20:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:53.753 20:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81620 00:18:53.753 20:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:53.753 20:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.338 Attaching 4 probes... 
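Each cycle then parses the bpftrace output the same way: the first @path[10.0.0.3, <port>]: <count> line in trace.txt is reduced to its port with awk, cut and sed, and compared against the port expected for the current ANA layout. A small sketch of that parse, run on one probe line taken from the non_optimized/4420 cycle above:

# Sketch of the trace.txt parsing traced at host/multipath.sh@69-71 above.
line='@path[10.0.0.3, 4420]: 17389'
port=$(echo "$line" | awk '$1=="@path[10.0.0.3," {print $2}' | cut -d ']' -f1 | sed -n 1p)
[[ $port == 4420 ]] && echo "I/O confirmed on port $port"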
00:19:00.338 @path[10.0.0.3, 4420]: 16556 00:19:00.338 @path[10.0.0.3, 4420]: 16909 00:19:00.338 @path[10.0.0.3, 4420]: 17124 00:19:00.338 @path[10.0.0.3, 4420]: 17888 00:19:00.338 @path[10.0.0.3, 4420]: 17430 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81620 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:00.338 [2024-11-26 20:40:00.572149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:00.338 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:00.680 20:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:07.239 20:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:07.239 20:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81794 00:19:07.239 20:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.239 20:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:13.815 20:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:13.815 20:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:13.815 Attaching 4 probes... 
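The state transitions driving these cycles are plain RPC calls: set_ANA_state issues one nvmf_subsystem_listener_set_ana_state per listener, and the final cycle above first re-adds the 4421 listener that an earlier step had removed. A hedged sketch of that sequence, reusing the NQN, addresses and flags shown in this run:

# Sketch of the listener re-add (host/multipath.sh@107-108) and the set_ANA_state step (@58-59).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421            # only needed after a prior remove_listener
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized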
00:19:13.815 @path[10.0.0.3, 4421]: 17421 00:19:13.815 @path[10.0.0.3, 4421]: 17544 00:19:13.815 @path[10.0.0.3, 4421]: 17304 00:19:13.815 @path[10.0.0.3, 4421]: 17791 00:19:13.815 @path[10.0.0.3, 4421]: 17357 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81794 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80996 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80996 ']' 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80996 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80996 00:19:13.815 killing process with pid 80996 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80996' 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80996 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80996 00:19:13.815 { 00:19:13.815 "results": [ 00:19:13.815 { 00:19:13.815 "job": "Nvme0n1", 00:19:13.815 "core_mask": "0x4", 00:19:13.815 "workload": "verify", 00:19:13.815 "status": "terminated", 00:19:13.815 "verify_range": { 00:19:13.815 "start": 0, 00:19:13.815 "length": 16384 00:19:13.815 }, 00:19:13.815 "queue_depth": 128, 00:19:13.815 "io_size": 4096, 00:19:13.815 "runtime": 55.870747, 00:19:13.815 "iops": 7473.660590219064, 00:19:13.815 "mibps": 29.19398668054322, 00:19:13.815 "io_failed": 0, 00:19:13.815 "io_timeout": 0, 00:19:13.815 "avg_latency_us": 17096.74168723897, 00:19:13.815 "min_latency_us": 177.80363636363637, 00:19:13.815 "max_latency_us": 7046430.72 00:19:13.815 } 00:19:13.815 ], 00:19:13.815 "core_count": 1 00:19:13.815 } 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80996 00:19:13.815 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:13.815 [2024-11-26 20:39:15.914948] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 
24.03.0 initialization... 00:19:13.815 [2024-11-26 20:39:15.915084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80996 ] 00:19:13.815 [2024-11-26 20:39:16.065383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.815 [2024-11-26 20:39:16.134167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.815 [2024-11-26 20:39:16.194449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:13.815 Running I/O for 90 seconds... 00:19:13.815 8391.00 IOPS, 32.78 MiB/s [2024-11-26T20:40:14.170Z] 8895.00 IOPS, 34.75 MiB/s [2024-11-26T20:40:14.170Z] 8927.33 IOPS, 34.87 MiB/s [2024-11-26T20:40:14.170Z] 8957.50 IOPS, 34.99 MiB/s [2024-11-26T20:40:14.170Z] 8898.00 IOPS, 34.76 MiB/s [2024-11-26T20:40:14.170Z] 8874.33 IOPS, 34.67 MiB/s [2024-11-26T20:40:14.170Z] 8867.14 IOPS, 34.64 MiB/s [2024-11-26T20:40:14.170Z] 8879.75 IOPS, 34.69 MiB/s [2024-11-26T20:40:14.170Z] [2024-11-26 20:39:25.694339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.815 [2024-11-26 20:39:25.694826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:13.815 [2024-11-26 20:39:25.694847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.694890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.694913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.694928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.694948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.694962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.694982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.694997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.816 [2024-11-26 20:39:25.695135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.816 [2024-11-26 20:39:25.695925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.695962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.695982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.696004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.696039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.696060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.696076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.696104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.696121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:13.816 [2024-11-26 20:39:25.696141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.816 [2024-11-26 20:39:25.696157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:19:13.817 [2024-11-26 20:39:25.696395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.817 [2024-11-26 20:39:25.696702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.696957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.696974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:13.817 [2024-11-26 20:39:25.697414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.817 [2024-11-26 20:39:25.697431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.818 [2024-11-26 20:39:25.697581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.697732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.697776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.697829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.697865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.697909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.697945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.697965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.697981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.818 [2024-11-26 20:39:25.698385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.818 [2024-11-26 20:39:25.698648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:13.818 [2024-11-26 20:39:25.698669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.698706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 
dnr:0 00:19:13.819 [2024-11-26 20:39:25.698744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.698795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.698831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.698874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.698911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.698948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.698964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.700627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.700959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:25.700993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 
20:39:25.701178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:25.701290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.819 [2024-11-26 20:39:25.701307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:13.819 8860.33 IOPS, 34.61 MiB/s [2024-11-26T20:40:14.174Z] 8859.90 IOPS, 34.61 MiB/s [2024-11-26T20:40:14.174Z] 8853.00 IOPS, 34.58 MiB/s [2024-11-26T20:40:14.174Z] 8863.92 IOPS, 34.62 MiB/s [2024-11-26T20:40:14.174Z] 8861.46 IOPS, 34.62 MiB/s [2024-11-26T20:40:14.174Z] 8885.07 IOPS, 34.71 MiB/s [2024-11-26T20:40:14.174Z] [2024-11-26 20:39:32.332414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:32.332500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:32.332577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:32.332604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:32.332628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.819 [2024-11-26 20:39:32.332644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:13.819 [2024-11-26 20:39:32.332692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.332711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.332747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 
[2024-11-26 20:39:32.332799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.332848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.332897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.332931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.332964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.332983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.332997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.820 [2024-11-26 20:39:32.333527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.333686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.333725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:13.820 [2024-11-26 20:39:32.333746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.820 [2024-11-26 20:39:32.333762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.333837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.333859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.333874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.333892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.333907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.333926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.333940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.333960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.333974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.333993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:19:13.821 [2024-11-26 20:39:32.334096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.821 [2024-11-26 20:39:32.334659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.334983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.334998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.335018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.821 [2024-11-26 20:39:32.335033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:13.821 [2024-11-26 20:39:32.335071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.822 [2024-11-26 20:39:32.335381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.822 [2024-11-26 20:39:32.335665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.335977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.335997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:13.822 [2024-11-26 20:39:32.336323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.822 [2024-11-26 20:39:32.336338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.823 [2024-11-26 20:39:32.336375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.823 [2024-11-26 20:39:32.336436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.823 [2024-11-26 20:39:32.336472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.823 [2024-11-26 20:39:32.336509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:19:13.823 [2024-11-26 20:39:32.336603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.336982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.336996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.337384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.337416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.338174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.823 [2024-11-26 20:39:32.338203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.338235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.823 [2024-11-26 20:39:32.338261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.338304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.823 [2024-11-26 20:39:32.338323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:13.823 [2024-11-26 20:39:32.338351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:32.338720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:32.338747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:13.824 8880.47 IOPS, 34.69 MiB/s [2024-11-26T20:40:14.179Z] 8331.12 IOPS, 32.54 MiB/s [2024-11-26T20:40:14.179Z] 8362.00 IOPS, 32.66 MiB/s [2024-11-26T20:40:14.179Z] 8379.67 IOPS, 32.73 MiB/s [2024-11-26T20:40:14.179Z] 8392.84 IOPS, 32.78 MiB/s [2024-11-26T20:40:14.179Z] 8405.70 IOPS, 32.83 MiB/s [2024-11-26T20:40:14.179Z] 8424.48 IOPS, 32.91 MiB/s [2024-11-26T20:40:14.179Z] 8444.09 IOPS, 32.98 MiB/s [2024-11-26T20:40:14.179Z] [2024-11-26 20:39:39.525483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.525953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.525967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.526034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.526067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.526099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.526132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.824 [2024-11-26 20:39:39.526165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0 00:19:13.824 [2024-11-26 20:39:39.526184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.824 [2024-11-26 20:39:39.526198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.824 [2024-11-26 20:39:39.526248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.824 [2024-11-26 20:39:39.526288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.824 [2024-11-26 20:39:39.526322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.824 [2024-11-26 20:39:39.526356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:13.824 [2024-11-26 20:39:39.526375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.526388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.526422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.526467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.526966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.526980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.527015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.527050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.527088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.527140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.527174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.825 [2024-11-26 20:39:39.527208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.825 [2024-11-26 20:39:39.527294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:13.825 [2024-11-26 20:39:39.527532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.825 [2024-11-26 20:39:39.527573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.527970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.528020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.528073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.528109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.528143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.826 [2024-11-26 20:39:39.528177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:19:13.826 [2024-11-26 20:39:39.528496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:13.826 [2024-11-26 20:39:39.528531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.826 [2024-11-26 20:39:39.528546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.528595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.528899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.528933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.528967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.528987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.827 [2024-11-26 20:39:39.529558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.827 [2024-11-26 20:39:39.529592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.529626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.529660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.827 [2024-11-26 20:39:39.529694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.827 [2024-11-26 20:39:39.529714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:39.529729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.529755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:39.529775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.529796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:39.529812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:39.530524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.530968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.530985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:39.531332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:39.531351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:13.828 8142.35 IOPS, 31.81 MiB/s [2024-11-26T20:40:14.183Z] 7803.08 IOPS, 30.48 MiB/s [2024-11-26T20:40:14.183Z] 7490.96 IOPS, 29.26 MiB/s [2024-11-26T20:40:14.183Z] 7202.85 IOPS, 28.14 MiB/s [2024-11-26T20:40:14.183Z] 6936.07 IOPS, 27.09 MiB/s [2024-11-26T20:40:14.183Z] 6688.36 IOPS, 26.13 MiB/s [2024-11-26T20:40:14.183Z] 6457.72 IOPS, 25.23 MiB/s [2024-11-26T20:40:14.183Z] 6448.90 IOPS, 25.19 MiB/s [2024-11-26T20:40:14.183Z] 6518.68 IOPS, 25.46 MiB/s [2024-11-26T20:40:14.183Z] 6585.34 IOPS, 25.72 MiB/s [2024-11-26T20:40:14.183Z] 6649.55 IOPS, 25.97 MiB/s [2024-11-26T20:40:14.183Z] 6702.68 IOPS, 26.18 MiB/s [2024-11-26T20:40:14.183Z] 6753.80 IOPS, 26.38 MiB/s [2024-11-26T20:40:14.183Z] [2024-11-26 20:39:52.985954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.828 [2024-11-26 20:39:52.986389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:52.986425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:52.986470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:13.828 [2024-11-26 20:39:52.986490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.828 [2024-11-26 20:39:52.986505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 
sqhd:000d p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.986966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.986988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.987004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.829 [2024-11-26 20:39:52.987578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.987617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.829 [2024-11-26 20:39:52.987656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.829 [2024-11-26 20:39:52.987671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.987685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:13.830 [2024-11-26 20:39:52.987730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.987744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.987773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.987803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.987832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.987861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.987891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.987921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.987950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.987979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.987994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988031] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.830 [2024-11-26 20:39:52.988468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.988497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.988527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.988556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.988585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.830 [2024-11-26 20:39:52.988625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.830 [2024-11-26 20:39:52.988640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.988984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.988999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:13.831 [2024-11-26 20:39:52.989296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:13.831 [2024-11-26 20:39:52.989503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.831 [2024-11-26 20:39:52.989561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.831 [2024-11-26 20:39:52.989583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.832 [2024-11-26 20:39:52.989598] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.832 [2024-11-26 20:39:52.989628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.832 [2024-11-26 20:39:52.989657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.832 [2024-11-26 20:39:52.989686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.832 [2024-11-26 20:39:52.989716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8e310 is same with the state(6) to be set 00:19:13.832 [2024-11-26 20:39:52.989748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.989759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.989770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99728 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.989783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.989807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.989817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100168 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.989831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.989854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.989864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100176 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.989884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.989907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.989917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100184 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.989930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.989952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.989968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100192 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.989982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.989995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.990005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.990015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100200 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.990028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.990052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.990061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.990071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100208 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.990084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.990097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.990107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.990117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100216 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.990130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.990143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.990153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.990163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100224 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.990176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.990189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 
20:39:52.990199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.990209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100232 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.990235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.990251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.990261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.832 [2024-11-26 20:39:52.990271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100240 len:8 PRP1 0x0 PRP2 0x0 00:19:13.832 [2024-11-26 20:39:52.990283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.832 [2024-11-26 20:39:52.990297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.832 [2024-11-26 20:39:52.990307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100248 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100256 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100264 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100272 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990506] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100280 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100288 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100296 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:13.833 [2024-11-26 20:39:52.990659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:13.833 [2024-11-26 20:39:52.990669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100304 len:8 PRP1 0x0 PRP2 0x0 00:19:13.833 [2024-11-26 20:39:52.990682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.833 [2024-11-26 20:39:52.990879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.833 [2024-11-26 20:39:52.990921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.833 [2024-11-26 20:39:52.990950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.990964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.833 [2024-11-26 20:39:52.990978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:13.833 [2024-11-26 20:39:52.990993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.833 [2024-11-26 20:39:52.991008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.833 [2024-11-26 20:39:52.991028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff1e0 is same with the state(6) to be set 00:19:13.833 [2024-11-26 20:39:52.992208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:13.833 [2024-11-26 20:39:52.992265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff1e0 (9): Bad file descriptor 00:19:13.833 [2024-11-26 20:39:52.992661] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.833 [2024-11-26 20:39:52.992694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff1e0 with addr=10.0.0.3, port=4421 00:19:13.833 [2024-11-26 20:39:52.992712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff1e0 is same with the state(6) to be set 00:19:13.833 [2024-11-26 20:39:52.992773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff1e0 (9): Bad file descriptor 00:19:13.833 [2024-11-26 20:39:52.992809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:13.833 [2024-11-26 20:39:52.992825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:13.833 [2024-11-26 20:39:52.992839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:13.833 [2024-11-26 20:39:52.992853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:13.833 [2024-11-26 20:39:52.992868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:13.833 6801.17 IOPS, 26.57 MiB/s [2024-11-26T20:40:14.188Z] 6846.81 IOPS, 26.75 MiB/s [2024-11-26T20:40:14.188Z] 6886.42 IOPS, 26.90 MiB/s [2024-11-26T20:40:14.188Z] 6925.64 IOPS, 27.05 MiB/s [2024-11-26T20:40:14.188Z] 6965.60 IOPS, 27.21 MiB/s [2024-11-26T20:40:14.188Z] 7014.93 IOPS, 27.40 MiB/s [2024-11-26T20:40:14.188Z] 7055.52 IOPS, 27.56 MiB/s [2024-11-26T20:40:14.188Z] 7092.84 IOPS, 27.71 MiB/s [2024-11-26T20:40:14.188Z] 7128.64 IOPS, 27.85 MiB/s [2024-11-26T20:40:14.188Z] 7164.00 IOPS, 27.98 MiB/s [2024-11-26T20:40:14.188Z] [2024-11-26 20:40:03.061447] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
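The long trace above is nvme_qpair.c printing each outstanding I/O as it is completed with ABORTED - SQ DELETION (00/08) while the active path is torn down; the host then reconnects to 10.0.0.3 port 4421, the per-second IOPS samples resume, and the sequence ends with "Resetting controller successful". A rough way to tally how many READ and WRITE commands were aborted in a capture like this (multipath.log is only a placeholder name for wherever the text above is saved; this is a sketch, not part of the test suite):

  awk '{ for (i = 1; i < NF; i++)
             if ($i == "*NOTICE*:" && ($(i + 1) == "READ" || $(i + 1) == "WRITE"))
                 ops[$(i + 1)]++ }              # count I/O command notices only, skip the ABORTED completions
       END { for (o in ops) printf "%-5s %d\n", o, ops[o] }' multipath.log

It keys on the "*NOTICE*: READ" / "*NOTICE*: WRITE" tokens emitted by nvme_io_qpair_print_command, so it still counts correctly when several records are wrapped onto one physical line, as they are here.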
00:19:13.833 7198.41 IOPS, 28.12 MiB/s [2024-11-26T20:40:14.188Z] 7233.17 IOPS, 28.25 MiB/s [2024-11-26T20:40:14.188Z] 7266.31 IOPS, 28.38 MiB/s [2024-11-26T20:40:14.188Z] 7297.24 IOPS, 28.50 MiB/s [2024-11-26T20:40:14.188Z] 7326.38 IOPS, 28.62 MiB/s [2024-11-26T20:40:14.188Z] 7354.57 IOPS, 28.73 MiB/s [2024-11-26T20:40:14.188Z] 7381.83 IOPS, 28.84 MiB/s [2024-11-26T20:40:14.188Z] 7406.19 IOPS, 28.93 MiB/s [2024-11-26T20:40:14.188Z] 7433.17 IOPS, 29.04 MiB/s [2024-11-26T20:40:14.188Z] 7456.27 IOPS, 29.13 MiB/s [2024-11-26T20:40:14.188Z] Received shutdown signal, test time was about 55.871571 seconds 00:19:13.833 00:19:13.833 Latency(us) 00:19:13.833 [2024-11-26T20:40:14.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.833 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:13.833 Verification LBA range: start 0x0 length 0x4000 00:19:13.833 Nvme0n1 : 55.87 7473.66 29.19 0.00 0.00 17096.74 177.80 7046430.72 00:19:13.833 [2024-11-26T20:40:14.188Z] =================================================================================================================== 00:19:13.833 [2024-11-26T20:40:14.188Z] Total : 7473.66 29.19 0.00 0.00 17096.74 177.80 7046430.72 00:19:13.833 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:13.834 rmmod nvme_tcp 00:19:13.834 rmmod nvme_fabrics 00:19:13.834 rmmod nvme_keyring 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80940 ']' 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80940 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80940 ']' 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80940 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80940 00:19:13.834 killing process with pid 80940 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80940' 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80940 00:19:13.834 20:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80940 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:13.834 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.093 20:40:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:14.093 00:19:14.093 real 1m1.853s 00:19:14.093 user 2m50.632s 00:19:14.093 sys 0m18.996s 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.093 ************************************ 00:19:14.093 END TEST nvmf_host_multipath 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:14.093 ************************************ 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.093 ************************************ 00:19:14.093 START TEST nvmf_timeout 00:19:14.093 ************************************ 00:19:14.093 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:14.093 * Looking for test storage... 00:19:14.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:14.353 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.354 --rc genhtml_branch_coverage=1 00:19:14.354 --rc genhtml_function_coverage=1 00:19:14.354 --rc genhtml_legend=1 00:19:14.354 --rc geninfo_all_blocks=1 00:19:14.354 --rc geninfo_unexecuted_blocks=1 00:19:14.354 00:19:14.354 ' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.354 --rc genhtml_branch_coverage=1 00:19:14.354 --rc genhtml_function_coverage=1 00:19:14.354 --rc genhtml_legend=1 00:19:14.354 --rc geninfo_all_blocks=1 00:19:14.354 --rc geninfo_unexecuted_blocks=1 00:19:14.354 00:19:14.354 ' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.354 --rc genhtml_branch_coverage=1 00:19:14.354 --rc genhtml_function_coverage=1 00:19:14.354 --rc genhtml_legend=1 00:19:14.354 --rc geninfo_all_blocks=1 00:19:14.354 --rc geninfo_unexecuted_blocks=1 00:19:14.354 00:19:14.354 ' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.354 --rc genhtml_branch_coverage=1 00:19:14.354 --rc genhtml_function_coverage=1 00:19:14.354 --rc genhtml_legend=1 00:19:14.354 --rc geninfo_all_blocks=1 00:19:14.354 --rc geninfo_unexecuted_blocks=1 00:19:14.354 00:19:14.354 ' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.354 
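The cmp_versions calls traced a little above (lt 1.15 2, splitting on IFS=.-: and walking the fields) are how common.sh decides that the installed lcov 1.15 predates 2.x before exporting the pre-2.0 --rc lcov_branch_coverage option names. A condensed, stand-alone sketch of that field-by-field numeric comparison (not the SPDK helper itself, just the same idea):

  version_lt() {    # returns 0 (true) when version $1 sorts strictly before $2
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1    # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov is older than 2.x"   # matches the lt 1.15 2 result traced above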
20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:14.354 20:40:14 
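The "line 33: [: : integer expression expected" message above is common.sh evaluating '[' '' -eq 1 ']': -eq needs integers on both sides, and the variable being tested expands to an empty string, so the check simply evaluates as false and the later tests at lines 37 and 39 run instead. A defensive form of that kind of check (SPDK_TEST_FLAG is a placeholder; the real variable name is not visible in this trace) defaults the value before comparing:

  if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
      : # flag enabled
  fi
  # or, equivalently, an arithmetic test: (( ${SPDK_TEST_FLAG:-0} == 1 ))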
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:14.354 Cannot find device "nvmf_init_br" 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:14.354 Cannot find device "nvmf_init_br2" 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:14.354 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:14.355 Cannot find device "nvmf_tgt_br" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:14.355 Cannot find device "nvmf_tgt_br2" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:14.355 Cannot find device "nvmf_init_br" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:14.355 Cannot find device "nvmf_init_br2" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:14.355 Cannot find device "nvmf_tgt_br" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:14.355 Cannot find device "nvmf_tgt_br2" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:14.355 Cannot find device "nvmf_br" 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:14.355 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:14.355 Cannot find device "nvmf_init_if" 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:14.614 Cannot find device "nvmf_init_if2" 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:14.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:14.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:14.614 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
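For readability, the veth topology that nvmf_veth_init builds in the trace above can be condensed into the sketch below. Interface names, addresses and the nvmf_tgt_ns_spdk namespace are taken directly from the commands logged above; the sketch assumes a root shell with iproute2 and iptables available, and it is a summary of the traced steps rather than part of the harness itself (the harness also tags its iptables rules with an SPDK_NVMF comment, omitted here).

# Target-side interfaces live in a separate network namespace
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs; the *_br ends stay in the root namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses in the root namespace, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and join the root-namespace ends on one bridge
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic (port 4420) in, and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping block that follows verifies this wiring end to end: the root namespace reaches 10.0.0.3 and 10.0.0.4, and the target namespace reaches 10.0.0.1 and 10.0.0.2 back across the bridge.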
00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:14.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:14.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:19:14.615 00:19:14.615 --- 10.0.0.3 ping statistics --- 00:19:14.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.615 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:14.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:14.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:14.615 00:19:14.615 --- 10.0.0.4 ping statistics --- 00:19:14.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.615 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:14.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:14.615 00:19:14.615 --- 10.0.0.1 ping statistics --- 00:19:14.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.615 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:14.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:14.615 00:19:14.615 --- 10.0.0.2 ping statistics --- 00:19:14.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.615 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82155 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82155 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82155 ']' 00:19:14.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.615 20:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.874 [2024-11-26 20:40:15.022847] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:19:14.874 [2024-11-26 20:40:15.022964] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.874 [2024-11-26 20:40:15.174505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:15.134 [2024-11-26 20:40:15.232954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.134 [2024-11-26 20:40:15.233013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.134 [2024-11-26 20:40:15.233024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.134 [2024-11-26 20:40:15.233032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.134 [2024-11-26 20:40:15.233044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.134 [2024-11-26 20:40:15.234241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.134 [2024-11-26 20:40:15.234244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.134 [2024-11-26 20:40:15.288472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:15.134 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.393 [2024-11-26 20:40:15.682884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.393 20:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:15.962 Malloc0 00:19:15.962 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:16.220 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.499 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:16.793 [2024-11-26 20:40:16.906586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82202 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82202 /var/tmp/bdevperf.sock 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82202 ']' 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.793 20:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 [2024-11-26 20:40:16.978924] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:19:16.794 [2024-11-26 20:40:16.979027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82202 ] 00:19:16.794 [2024-11-26 20:40:17.122002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.053 [2024-11-26 20:40:17.171648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.053 [2024-11-26 20:40:17.225574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.053 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.053 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:17.053 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:17.311 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:17.568 NVMe0n1 00:19:17.568 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:17.568 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82218 00:19:17.568 20:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:17.826 Running I/O for 10 seconds... 
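At this point the timeout test proper begins: the target has been configured over rpc.py, and a separate bdevperf process (core mask 0x4, RPC socket /var/tmp/bdevperf.sock) has attached to it with a 5-second controller-loss timeout and a 2-second reconnect delay, so the listener removal in the next step forces the queued I/O to be aborted. Condensed, the sequence traced above is roughly the following; all paths, NQNs and addresses are the ones shown in the log, and the sketch omits the waitforlisten/trap plumbing of the harness.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem listening on 10.0.0.3:4420
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: bdevperf with its own RPC socket, bdev_nvme retry count set to -1,
# and a controller that gives up after 5 s of loss with 2 s between reconnect attempts
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The remove_listener call below takes 10.0.0.3:4420 away while bdevperf is mid-run, which is why the log then fills with ABORTED - SQ DELETION completions for the outstanding READ and WRITE commands on qid:1.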
00:19:18.761 20:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:19.025 6932.00 IOPS, 27.08 MiB/s [2024-11-26T20:40:19.380Z] [2024-11-26 20:40:19.180688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65376 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.180987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.180997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.181008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.181017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.181028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.181038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.181051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.181061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.025 [2024-11-26 20:40:19.181072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.025 [2024-11-26 20:40:19.181082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:19.026 [2024-11-26 20:40:19.181171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181641] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.026 [2024-11-26 20:40:19.181788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.026 [2024-11-26 20:40:19.181800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.026 [2024-11-26 20:40:19.181809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.181986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.181996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:19.027 [2024-11-26 20:40:19.182067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.027 [2024-11-26 20:40:19.182118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.027 [2024-11-26 20:40:19.182139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182290] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.027 [2024-11-26 20:40:19.182319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.027 [2024-11-26 20:40:19.182516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.027 [2024-11-26 20:40:19.182527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65080 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.182981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.182993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:19.028 [2024-11-26 20:40:19.183155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.028 [2024-11-26 20:40:19.183187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.028 [2024-11-26 20:40:19.183196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.029 [2024-11-26 20:40:19.183508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43970 is same with the state(6) to be set 00:19:19.029 [2024-11-26 20:40:19.183550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.029 [2024-11-26 20:40:19.183559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.029 [2024-11-26 20:40:19.183567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65296 len:8 PRP1 0x0 PRP2 0x0 00:19:19.029 [2024-11-26 20:40:19.183577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.029 [2024-11-26 20:40:19.183877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:19.029 [2024-11-26 20:40:19.183962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde3e50 (9): Bad file descriptor 00:19:19.029 [2024-11-26 20:40:19.184062] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:19.029 [2024-11-26 20:40:19.184082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde3e50 with 
addr=10.0.0.3, port=4420 00:19:19.029 [2024-11-26 20:40:19.184093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde3e50 is same with the state(6) to be set 00:19:19.029 [2024-11-26 20:40:19.184110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde3e50 (9): Bad file descriptor 00:19:19.029 [2024-11-26 20:40:19.184126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:19.029 [2024-11-26 20:40:19.184136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:19.029 [2024-11-26 20:40:19.184146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:19.029 [2024-11-26 20:40:19.184158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:19.029 [2024-11-26 20:40:19.184169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:19.029 20:40:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:20.904 4042.50 IOPS, 15.79 MiB/s [2024-11-26T20:40:21.259Z] 2695.00 IOPS, 10.53 MiB/s [2024-11-26T20:40:21.259Z] [2024-11-26 20:40:21.184428] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.904 [2024-11-26 20:40:21.184511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde3e50 with addr=10.0.0.3, port=4420 00:19:20.904 [2024-11-26 20:40:21.184529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde3e50 is same with the state(6) to be set 00:19:20.904 [2024-11-26 20:40:21.184556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde3e50 (9): Bad file descriptor 00:19:20.904 [2024-11-26 20:40:21.184577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:20.904 [2024-11-26 20:40:21.184587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:20.904 [2024-11-26 20:40:21.184598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:20.904 [2024-11-26 20:40:21.184610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
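The connect() failures above are plain ECONNREFUSED (errno = 111) against 10.0.0.3:4420 while the target listener is down, retried once per reset cycle while the script sleeps. A quick manual probe, not part of host/timeout.sh and shown only as an illustration, would fail the same way:

  # Hypothetical manual probe (not part of the test): check whether anything is
  # accepting TCP connections on the address/port the host keeps retrying.
  # errno = 111 (ECONNREFUSED) in the log corresponds to this probe failing.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
      echo "port 4420 reachable"
  else
      echo "connection refused, expected while the listener is removed"
  fi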
00:19:20.904 [2024-11-26 20:40:21.184622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:20.904 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:20.904 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:20.904 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:21.472 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:21.472 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:21.472 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:21.472 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:21.731 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:21.731 20:40:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:22.666 2021.25 IOPS, 7.90 MiB/s [2024-11-26T20:40:23.280Z] 1617.00 IOPS, 6.32 MiB/s [2024-11-26T20:40:23.280Z] [2024-11-26 20:40:23.184863] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.925 [2024-11-26 20:40:23.184969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde3e50 with addr=10.0.0.3, port=4420 00:19:22.925 [2024-11-26 20:40:23.184986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde3e50 is same with the state(6) to be set 00:19:22.925 [2024-11-26 20:40:23.185015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde3e50 (9): Bad file descriptor 00:19:22.925 [2024-11-26 20:40:23.185048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:22.925 [2024-11-26 20:40:23.185060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:22.925 [2024-11-26 20:40:23.185072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:22.925 [2024-11-26 20:40:23.185083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:22.925 [2024-11-26 20:40:23.185095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:24.843 1347.50 IOPS, 5.26 MiB/s [2024-11-26T20:40:25.198Z] 1155.00 IOPS, 4.51 MiB/s [2024-11-26T20:40:25.198Z] [2024-11-26 20:40:25.185293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:24.843 [2024-11-26 20:40:25.185340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:24.843 [2024-11-26 20:40:25.185353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:24.843 [2024-11-26 20:40:25.185364] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:24.843 [2024-11-26 20:40:25.185376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
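The get_controller and get_bdev steps traced above (host/timeout.sh@41 and @37) are thin wrappers around rpc.py piped through jq. A minimal sketch of how they are being used here; the wrapper bodies are an assumption, while the RPC names, socket path and jq filter are exactly what the trace shows:

  # Assumed shape of the helpers exercised above.
  get_controller() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_get_bdevs | jq -r '.[].name'
  }
  # At this point in the run both names still resolve, so the checks pass:
  [[ $(get_controller) == "NVMe0" ]]
  [[ $(get_bdev) == "NVMe0n1" ]]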
00:19:26.040 1010.62 IOPS, 3.95 MiB/s 00:19:26.040 Latency(us) 00:19:26.040 [2024-11-26T20:40:26.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.040 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.040 Verification LBA range: start 0x0 length 0x4000 00:19:26.040 NVMe0n1 : 8.18 988.90 3.86 15.66 0.00 127227.74 4140.68 7015926.69 00:19:26.040 [2024-11-26T20:40:26.395Z] =================================================================================================================== 00:19:26.040 [2024-11-26T20:40:26.395Z] Total : 988.90 3.86 15.66 0.00 127227.74 4140.68 7015926.69 00:19:26.040 { 00:19:26.040 "results": [ 00:19:26.040 { 00:19:26.040 "job": "NVMe0n1", 00:19:26.040 "core_mask": "0x4", 00:19:26.040 "workload": "verify", 00:19:26.040 "status": "finished", 00:19:26.040 "verify_range": { 00:19:26.040 "start": 0, 00:19:26.040 "length": 16384 00:19:26.040 }, 00:19:26.040 "queue_depth": 128, 00:19:26.040 "io_size": 4096, 00:19:26.040 "runtime": 8.175742, 00:19:26.040 "iops": 988.9010685513315, 00:19:26.040 "mibps": 3.8628947990286386, 00:19:26.040 "io_failed": 128, 00:19:26.040 "io_timeout": 0, 00:19:26.040 "avg_latency_us": 127227.74012463611, 00:19:26.040 "min_latency_us": 4140.683636363637, 00:19:26.040 "max_latency_us": 7015926.69090909 00:19:26.040 } 00:19:26.040 ], 00:19:26.040 "core_count": 1 00:19:26.040 } 00:19:26.608 20:40:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:26.608 20:40:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.608 20:40:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:26.867 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:26.867 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:26.867 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:26.867 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82218 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82202 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82202 ']' 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82202 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82202 00:19:27.126 killing process with pid 82202 00:19:27.126 Received shutdown signal, test time was about 9.403922 seconds 00:19:27.126 00:19:27.126 Latency(us) 00:19:27.126 [2024-11-26T20:40:27.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.126 [2024-11-26T20:40:27.481Z] =================================================================================================================== 00:19:27.126 [2024-11-26T20:40:27.481Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82202' 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82202 00:19:27.126 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82202 00:19:27.384 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:27.642 [2024-11-26 20:40:27.855740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82341 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82341 /var/tmp/bdevperf.sock 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82341 ']' 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.642 20:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:27.642 [2024-11-26 20:40:27.929631] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
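The fault being exercised is driven entirely from the target side: the listener for nqn.2016-06.io.spdk:cnode1 was just added back above (host/timeout.sh@71) before this bdevperf instance starts, and it is removed again further down in the trace. In isolation the two RPCs look like this; the $rpc shorthand is ours, the invocations themselves appear verbatim in the log:

  # Listener toggle that injects the connection loss under test.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # ... run I/O against the subsystem ...
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420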
00:19:27.642 [2024-11-26 20:40:27.929933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82341 ] 00:19:27.900 [2024-11-26 20:40:28.077120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.900 [2024-11-26 20:40:28.139842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.900 [2024-11-26 20:40:28.195774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.158 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.158 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:28.158 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:28.416 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:28.677 NVMe0n1 00:19:28.677 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82357 00:19:28.677 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.677 20:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:28.677 Running I/O for 10 seconds... 
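Condensed, the bdevperf session that was just set up amounts to the sketch below. Every flag is copied from the trace; the shell wiring around them is an approximation rather than the literal host/timeout.sh:

  # Approximate reconstruction of the traced setup.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # -z makes bdevperf hold its queued workload until an explicit perform_tests RPC.
  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!

  # The traced script blocks on waitforlisten; a minimal equivalent:
  while [ ! -S "$sock" ]; do sleep 0.1; done

  $rpc -s "$sock" bdev_nvme_set_options -r -1
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Kick off the I/O; with the options above the bdev_nvme layer retries the
  # connection at the configured reconnect delay and gives up once the
  # ctrlr-loss timeout expires after the listener disappears.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
  rpc_pid=$!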
00:19:29.612 20:40:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:29.877 6676.00 IOPS, 26.08 MiB/s [2024-11-26T20:40:30.232Z] [2024-11-26 20:40:30.145441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62496 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.877 [2024-11-26 20:40:30.145949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.145979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.145989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.877 [2024-11-26 20:40:30.146009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146154] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.877 [2024-11-26 20:40:30.146190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.877 [2024-11-26 20:40:30.146199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.878 [2024-11-26 20:40:30.146355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.878 [2024-11-26 20:40:30.146375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.878 [2024-11-26 20:40:30.146533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:29.878 [2024-11-26 20:40:30.146787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.146974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.146983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 
20:40:30.146994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.878 [2024-11-26 20:40:30.147003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.878 [2024-11-26 20:40:30.147014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.879 [2024-11-26 20:40:30.147814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.879 [2024-11-26 20:40:30.147830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.879 [2024-11-26 20:40:30.147839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.147985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.147993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.880 [2024-11-26 20:40:30.148177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b81970 is same with the state(6) to be set 00:19:29.880 [2024-11-26 20:40:30.148199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:29.880 [2024-11-26 20:40:30.148207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:29.880 [2024-11-26 20:40:30.148215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62416 len:8 PRP1 0x0 PRP2 0x0 00:19:29.880 [2024-11-26 20:40:30.148241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.880 [2024-11-26 20:40:30.148561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:29.880 [2024-11-26 20:40:30.148697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor 00:19:29.880 [2024-11-26 20:40:30.148802] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.880 [2024-11-26 20:40:30.148823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b21e50 with 
addr=10.0.0.3, port=4420 [2024-11-26 20:40:30.148834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21e50 is same with the state(6) to be set
00:19:29.880 [2024-11-26 20:40:30.148852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor
00:19:29.880 [2024-11-26 20:40:30.148868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:19:29.880 [2024-11-26 20:40:30.148878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:19:29.880 [2024-11-26 20:40:30.148904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:19:29.880 [2024-11-26 20:40:30.148914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:19:29.880 [2024-11-26 20:40:30.148924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:29.880 20:40:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:30.856 3850.50 IOPS, 15.04 MiB/s [2024-11-26T20:40:31.211Z]
[2024-11-26 20:40:31.149052] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:30.856 [2024-11-26 20:40:31.149268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b21e50 with addr=10.0.0.3, port=4420
00:19:30.856 [2024-11-26 20:40:31.149293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21e50 is same with the state(6) to be set
00:19:30.856 [2024-11-26 20:40:31.149337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor
00:19:30.856 [2024-11-26 20:40:31.149357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:19:30.856 [2024-11-26 20:40:31.149368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:19:30.856 [2024-11-26 20:40:31.149378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:19:30.856 [2024-11-26 20:40:31.149390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:19:30.856 [2024-11-26 20:40:31.149402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:30.856 20:40:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:31.114 [2024-11-26 20:40:31.457037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:19:31.373 20:40:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82357
00:19:31.940 2567.00 IOPS, 10.03 MiB/s [2024-11-26T20:40:32.295Z]
[2024-11-26 20:40:32.166236] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
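Note: in the sequence above, host/timeout.sh re-adds the target's TCP listener while the host keeps retrying; connect() fails with errno = 111 (ECONNREFUSED on Linux) until nvmf_subsystem_add_listener is issued, after which the next controller reset succeeds. A minimal Python sketch of that listener toggle, assuming the rpc.py path, NQN, and listener arguments printed in this log; the wrapper function itself is illustrative and is not part of the test scripts:

    import subprocess

    RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as printed in this log
    NQN = "nqn.2016-06.io.spdk:cnode1"
    LISTENER_ARGS = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

    def set_listener(present: bool) -> None:
        # Add or remove the TCP listener that the host-side controller keeps reconnecting to.
        verb = "nvmf_subsystem_add_listener" if present else "nvmf_subsystem_remove_listener"
        subprocess.run([RPC_PY, verb, NQN, *LISTENER_ARGS], check=True)

    # Mirrors this log: with the listener removed, reconnects fail with errno 111;
    # set_listener(True) restores it and the following reset attempt succeeds.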
00:19:33.811 1925.25 IOPS, 7.52 MiB/s [2024-11-26T20:40:35.102Z]
2696.40 IOPS, 10.53 MiB/s [2024-11-26T20:40:36.038Z]
3401.50 IOPS, 13.29 MiB/s [2024-11-26T20:40:37.414Z]
3903.00 IOPS, 15.25 MiB/s [2024-11-26T20:40:38.349Z]
4293.12 IOPS, 16.77 MiB/s [2024-11-26T20:40:39.283Z]
4612.67 IOPS, 18.02 MiB/s [2024-11-26T20:40:39.283Z]
4868.20 IOPS, 19.02 MiB/s
00:19:38.928 Latency(us)
00:19:38.928 [2024-11-26T20:40:39.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:38.928 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:38.928 Verification LBA range: start 0x0 length 0x4000
00:19:38.928 NVMe0n1 : 10.01 4875.86 19.05 0.00 0.00 26201.65 2263.97 3019898.88
00:19:38.928 [2024-11-26T20:40:39.283Z] ===================================================================================================================
00:19:38.928 [2024-11-26T20:40:39.283Z] Total : 4875.86 19.05 0.00 0.00 26201.65 2263.97 3019898.88
00:19:38.928 {
00:19:38.928 "results": [
00:19:38.928 {
00:19:38.928 "job": "NVMe0n1",
00:19:38.928 "core_mask": "0x4",
00:19:38.928 "workload": "verify",
00:19:38.928 "status": "finished",
00:19:38.928 "verify_range": {
00:19:38.928 "start": 0,
00:19:38.928 "length": 16384
00:19:38.928 },
00:19:38.928 "queue_depth": 128,
00:19:38.928 "io_size": 4096,
00:19:38.928 "runtime": 10.010534,
00:19:38.928 "iops": 4875.863765109833,
00:19:38.928 "mibps": 19.046342832460287,
00:19:38.928 "io_failed": 0,
00:19:38.928 "io_timeout": 0,
00:19:38.928 "avg_latency_us": 26201.648242871244,
00:19:38.928 "min_latency_us": 2263.970909090909,
00:19:38.928 "max_latency_us": 3019898.88
00:19:38.928 }
00:19:38.928 ],
00:19:38.928 "core_count": 1
00:19:38.928 }
00:19:38.928 20:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82463
00:19:38.928 20:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:38.928 20:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:38.928 Running I/O for 10 seconds...
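The block above is bdevperf's result summary for the verify job, printed first as a table and then as JSON. A short sketch of pulling the headline numbers back out of that JSON, assuming the block has been captured to a file (the filename is hypothetical); the field names are exactly the ones printed above:

    import json

    # Hypothetical capture of the JSON results block printed by bdevperf above.
    with open("bdevperf_results.json") as f:
        data = json.load(f)

    for job in data["results"]:
        print(f'{job["job"]}: {job["iops"]:.2f} IOPS, '
              f'avg {job["avg_latency_us"]:.2f} us '
              f'(min {job["min_latency_us"]:.2f}, max {job["max_latency_us"]:.2f}), '
              f'io_failed={job["io_failed"]}, io_timeout={job["io_timeout"]}')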
00:19:39.861 20:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:40.122 7068.00 IOPS, 27.61 MiB/s [2024-11-26T20:40:40.477Z] [2024-11-26 20:40:40.302778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.302977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.302987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.302996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:40.122 [2024-11-26 20:40:40.303254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.303333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.303353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303490] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.122 [2024-11-26 20:40:40.303541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.122 [2024-11-26 20:40:40.303634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.122 [2024-11-26 20:40:40.303642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.303981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.303990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:40.123 [2024-11-26 20:40:40.304342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.123 [2024-11-26 20:40:40.304463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.123 [2024-11-26 20:40:40.304473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.304981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.304992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66560 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.124 [2024-11-26 20:40:40.305286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.124 [2024-11-26 20:40:40.305300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:40.125 [2024-11-26 20:40:40.305410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.125 [2024-11-26 20:40:40.305532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7ffd0 is same with the state(6) to be set 00:19:40.125 [2024-11-26 20:40:40.305555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.125 [2024-11-26 20:40:40.305562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.125 [2024-11-26 20:40:40.305572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66696 len:8 PRP1 0x0 PRP2 0x0 00:19:40.125 [2024-11-26 20:40:40.305581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.125 [2024-11-26 20:40:40.305863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:40.125 [2024-11-26 20:40:40.305938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor 00:19:40.125 [2024-11-26 20:40:40.306049] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.125 [2024-11-26 20:40:40.306070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1b21e50 with addr=10.0.0.3, port=4420 00:19:40.125 [2024-11-26 20:40:40.306086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21e50 is same with the state(6) to be set 00:19:40.125 [2024-11-26 20:40:40.306105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor 00:19:40.125 [2024-11-26 20:40:40.306122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:40.125 [2024-11-26 20:40:40.306132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:40.125 [2024-11-26 20:40:40.306143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:40.125 [2024-11-26 20:40:40.306159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:40.125 [2024-11-26 20:40:40.306170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:40.125 20:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:41.058 4110.00 IOPS, 16.05 MiB/s [2024-11-26T20:40:41.413Z] [2024-11-26 20:40:41.306307] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.058 [2024-11-26 20:40:41.306376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b21e50 with addr=10.0.0.3, port=4420 00:19:41.058 [2024-11-26 20:40:41.306392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21e50 is same with the state(6) to be set 00:19:41.058 [2024-11-26 20:40:41.306431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor 00:19:41.058 [2024-11-26 20:40:41.306450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:41.058 [2024-11-26 20:40:41.306460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:41.058 [2024-11-26 20:40:41.306472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:41.058 [2024-11-26 20:40:41.306483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:41.058 [2024-11-26 20:40:41.306494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:41.993 2740.00 IOPS, 10.70 MiB/s [2024-11-26T20:40:42.348Z] [2024-11-26 20:40:42.306637] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.993 [2024-11-26 20:40:42.306716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b21e50 with addr=10.0.0.3, port=4420 00:19:41.993 [2024-11-26 20:40:42.306733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21e50 is same with the state(6) to be set 00:19:41.993 [2024-11-26 20:40:42.306758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor 00:19:41.993 [2024-11-26 20:40:42.306777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:41.993 [2024-11-26 20:40:42.306789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:41.993 [2024-11-26 20:40:42.306801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:41.993 [2024-11-26 20:40:42.306812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:41.993 [2024-11-26 20:40:42.306824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:43.183 2055.00 IOPS, 8.03 MiB/s [2024-11-26T20:40:43.538Z] [2024-11-26 20:40:43.310561] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.183 [2024-11-26 20:40:43.310635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b21e50 with addr=10.0.0.3, port=4420 00:19:43.183 [2024-11-26 20:40:43.310652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21e50 is same with the state(6) to be set 00:19:43.183 [2024-11-26 20:40:43.310903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21e50 (9): Bad file descriptor 00:19:43.183 [2024-11-26 20:40:43.311149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:43.183 [2024-11-26 20:40:43.311163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:43.183 [2024-11-26 20:40:43.311174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:43.183 [2024-11-26 20:40:43.311185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
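The entries above show the host side of an induced connection loss: each reconnect attempt to 10.0.0.3:4420 fails with errno 111 (ECONNREFUSED), the controller is marked as being in a failed state, and bdev_nvme schedules another reset until the listener comes back. A minimal sketch of the target-side toggle that drives this, pieced together only from the rpc.py calls visible elsewhere in this trace (NQN, address, port, and script path are copied from the log; the exact timing in timeout.sh may differ):
# drop the TCP listener so the host's connect() attempts fail with ECONNREFUSED (errno 111)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# keep it down for a few reconnect attempts (host/timeout.sh@101 sleeps 3 seconds at this point)
sleep 3
# restore the listener; the next reconnect attempt succeeds and I/O resumes
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420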
00:19:43.183 [2024-11-26 20:40:43.311197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:43.183 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:43.441 [2024-11-26 20:40:43.592037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:43.441 20:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82463 00:19:44.006 1644.00 IOPS, 6.42 MiB/s [2024-11-26T20:40:44.361Z] [2024-11-26 20:40:44.341905] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:45.873 2636.50 IOPS, 10.30 MiB/s [2024-11-26T20:40:47.161Z] 3614.71 IOPS, 14.12 MiB/s [2024-11-26T20:40:48.535Z] 4358.38 IOPS, 17.02 MiB/s [2024-11-26T20:40:49.468Z] 4910.56 IOPS, 19.18 MiB/s [2024-11-26T20:40:49.468Z] 5333.50 IOPS, 20.83 MiB/s 00:19:49.113 Latency(us) 00:19:49.113 [2024-11-26T20:40:49.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.113 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:49.113 Verification LBA range: start 0x0 length 0x4000 00:19:49.113 NVMe0n1 : 10.01 5341.76 20.87 3687.07 0.00 14146.45 692.60 3019898.88 00:19:49.113 [2024-11-26T20:40:49.468Z] =================================================================================================================== 00:19:49.113 [2024-11-26T20:40:49.468Z] Total : 5341.76 20.87 3687.07 0.00 14146.45 0.00 3019898.88 00:19:49.113 { 00:19:49.113 "results": [ 00:19:49.113 { 00:19:49.113 "job": "NVMe0n1", 00:19:49.113 "core_mask": "0x4", 00:19:49.113 "workload": "verify", 00:19:49.113 "status": "finished", 00:19:49.113 "verify_range": { 00:19:49.113 "start": 0, 00:19:49.113 "length": 16384 00:19:49.113 }, 00:19:49.113 "queue_depth": 128, 00:19:49.113 "io_size": 4096, 00:19:49.113 "runtime": 10.008494, 00:19:49.113 "iops": 5341.762706756881, 00:19:49.113 "mibps": 20.866260573269066, 00:19:49.113 "io_failed": 36902, 00:19:49.113 "io_timeout": 0, 00:19:49.113 "avg_latency_us": 14146.44820128469, 00:19:49.113 "min_latency_us": 692.5963636363637, 00:19:49.113 "max_latency_us": 3019898.88 00:19:49.113 } 00:19:49.113 ], 00:19:49.113 "core_count": 1 00:19:49.113 } 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82341 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82341 ']' 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82341 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82341 00:19:49.113 killing process with pid 82341 00:19:49.113 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.113 00:19:49.113 Latency(us) 00:19:49.113 [2024-11-26T20:40:49.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.113 [2024-11-26T20:40:49.468Z] =================================================================================================================== 00:19:49.113 [2024-11-26T20:40:49.468Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82341' 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82341 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82341 00:19:49.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82577 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82577 /var/tmp/bdevperf.sock 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82577 ']' 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.113 20:40:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:49.371 [2024-11-26 20:40:49.469074] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:19:49.371 [2024-11-26 20:40:49.469451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82577 ] 00:19:49.371 [2024-11-26 20:40:49.620067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.371 [2024-11-26 20:40:49.680249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.629 [2024-11-26 20:40:49.740215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.198 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.198 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:50.198 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82592 00:19:50.198 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82577 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:50.198 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:50.765 20:40:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:51.025 NVMe0n1 00:19:51.025 20:40:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82635 00:19:51.025 20:40:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.025 20:40:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:51.283 Running I/O for 10 seconds... 
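The trace above brings up the second bdevperf instance and wires it to the target over TCP before the timed run starts. A minimal standalone sketch of that setup, assembled only from the commands shown in this trace (binary paths, socket path, flags, NQN, and address are copied from the log; backgrounding and process waiting are simplified compared to timeout.sh):
# start bdevperf on its own RPC socket; the workload (-q 128, -o 4096, -w randread, -t 10) is triggered later via perform_tests (flags from host/timeout.sh@109)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
# configure bdev_nvme options (flags copied verbatim from host/timeout.sh@118)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
# attach the TCP controller; retry the connection every 2 s and declare the controller lost after 5 s (host/timeout.sh@120)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# kick off the run defined on the bdevperf command line (host/timeout.sh@123)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests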
00:19:52.221 20:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:52.221 14859.00 IOPS, 58.04 MiB/s [2024-11-26T20:40:52.576Z] [2024-11-26 20:40:52.557766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.557991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 
00:19:52.221 [2024-11-26 20:40:52.558176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558617] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.221 [2024-11-26 20:40:52.558727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the 
state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.558998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7cac0 is same with the state(6) to be set 00:19:52.222 [2024-11-26 20:40:52.559169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 
20:40:52.559461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.222 [2024-11-26 20:40:52.559544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.222 [2024-11-26 20:40:52.559554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.559976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.559986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18880 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.223 [2024-11-26 20:40:52.560181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.223 [2024-11-26 20:40:52.560191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:52.224 [2024-11-26 20:40:52.560349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 
20:40:52.560542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.224 [2024-11-26 20:40:52.560727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.224 [2024-11-26 20:40:52.560737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.560989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.560999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.225 [2024-11-26 20:40:52.561302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.225 [2024-11-26 20:40:52.561313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:52.226 [2024-11-26 20:40:52.561348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 
20:40:52.561557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.226 [2024-11-26 20:40:52.561767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.226 [2024-11-26 20:40:52.561777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.227 [2024-11-26 20:40:52.561785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.227 [2024-11-26 20:40:52.561796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.227 [2024-11-26 20:40:52.561808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.227 [2024-11-26 20:40:52.561818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.227 [2024-11-26 20:40:52.561826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.227 [2024-11-26 20:40:52.561836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e20 is same with the state(6) to be set 00:19:52.227 [2024-11-26 20:40:52.561847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.227 [2024-11-26 20:40:52.561854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.227 [2024-11-26 20:40:52.561861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100248 len:8 PRP1 0x0 PRP2 0x0 00:19:52.227 [2024-11-26 20:40:52.561869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.227 [2024-11-26 20:40:52.562181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:52.227 [2024-11-26 20:40:52.562637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3e50 (9): Bad file descriptor 00:19:52.227 [2024-11-26 20:40:52.562902] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.227 [2024-11-26 20:40:52.563026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3e50 with addr=10.0.0.3, port=4420 00:19:52.227 [2024-11-26 20:40:52.563202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d3e50 is same with the state(6) to be set 00:19:52.227 [2024-11-26 20:40:52.563374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3e50 (9): Bad file descriptor 00:19:52.227 [2024-11-26 20:40:52.563561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:52.227 [2024-11-26 20:40:52.563690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:52.227 [2024-11-26 20:40:52.563822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
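The wall of *NOTICE* lines above is the I/O queue pair being drained: once the TCP connection to 10.0.0.3:4420 is torn down, every queued READ is completed manually as ABORTED - SQ DELETION, and the reset attempt that follows fails because connect() returns errno 111 (connection refused). To summarise how many commands were in flight, a saved copy of this console output can be grepped; console.log below is only an illustrative file name:

    # Total aborted completions, and the number of distinct command IDs they covered.
    grep -c 'ABORTED - SQ DELETION' console.log
    grep -o 'READ sqid:1 cid:[0-9]*' console.log | sort -u | wc -l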
00:19:52.227 [2024-11-26 20:40:52.563991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:52.227 [2024-11-26 20:40:52.564127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:52.485 20:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82635 00:19:54.442 8637.00 IOPS, 33.74 MiB/s [2024-11-26T20:40:54.797Z] 5758.00 IOPS, 22.49 MiB/s [2024-11-26T20:40:54.797Z] [2024-11-26 20:40:54.564498] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.442 [2024-11-26 20:40:54.564735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3e50 with addr=10.0.0.3, port=4420 00:19:54.442 [2024-11-26 20:40:54.564907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d3e50 is same with the state(6) to be set 00:19:54.442 [2024-11-26 20:40:54.564943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3e50 (9): Bad file descriptor 00:19:54.442 [2024-11-26 20:40:54.564978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:54.442 [2024-11-26 20:40:54.564988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:54.442 [2024-11-26 20:40:54.565001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:54.442 [2024-11-26 20:40:54.565013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:54.442 [2024-11-26 20:40:54.565024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:56.309 4318.50 IOPS, 16.87 MiB/s [2024-11-26T20:40:56.664Z] 3454.80 IOPS, 13.50 MiB/s [2024-11-26T20:40:56.664Z] [2024-11-26 20:40:56.565213] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.309 [2024-11-26 20:40:56.565297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3e50 with addr=10.0.0.3, port=4420 00:19:56.309 [2024-11-26 20:40:56.565316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d3e50 is same with the state(6) to be set 00:19:56.309 [2024-11-26 20:40:56.565343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3e50 (9): Bad file descriptor 00:19:56.309 [2024-11-26 20:40:56.565363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:56.309 [2024-11-26 20:40:56.565372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:56.309 [2024-11-26 20:40:56.565383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:56.309 [2024-11-26 20:40:56.565394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
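The retry cadence is visible in the timestamps above: each reconnect attempt lands roughly two seconds after the previous one (20:40:52, :54, :56) and fails with the same errno 111. The trace.txt dumped further down records the same spacing in milliseconds. A small sketch, assuming the file holds the "<ms>: <event>" lines shown there (without the console's own elapsed-time column):

    # Print the gap between successive "reconnect bdev controller NVMe0" events.
    awk -F': ' '/reconnect bdev controller NVMe0/ {
        if (prev != "") printf "%.1f ms since previous reconnect\n", $1 - prev
        prev = $1
    }' trace.txt

With the values recorded below this prints gaps of roughly 2000 ms, consistent with a two-second reconnect delay.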
00:19:56.309 [2024-11-26 20:40:56.565407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:58.233 2879.00 IOPS, 11.25 MiB/s [2024-11-26T20:40:58.588Z] 2467.71 IOPS, 9.64 MiB/s [2024-11-26T20:40:58.588Z] [2024-11-26 20:40:58.565503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:58.233 [2024-11-26 20:40:58.565568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:58.233 [2024-11-26 20:40:58.565581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:58.233 [2024-11-26 20:40:58.565592] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:58.233 [2024-11-26 20:40:58.565604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:59.447 2159.25 IOPS, 8.43 MiB/s 00:19:59.447 Latency(us) 00:19:59.447 [2024-11-26T20:40:59.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.447 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:59.447 NVMe0n1 : 8.16 2115.69 8.26 15.68 0.00 59953.01 7804.74 7015926.69 00:19:59.447 [2024-11-26T20:40:59.802Z] =================================================================================================================== 00:19:59.447 [2024-11-26T20:40:59.802Z] Total : 2115.69 8.26 15.68 0.00 59953.01 7804.74 7015926.69 00:19:59.447 { 00:19:59.447 "results": [ 00:19:59.447 { 00:19:59.447 "job": "NVMe0n1", 00:19:59.447 "core_mask": "0x4", 00:19:59.447 "workload": "randread", 00:19:59.447 "status": "finished", 00:19:59.447 "queue_depth": 128, 00:19:59.447 "io_size": 4096, 00:19:59.447 "runtime": 8.164697, 00:19:59.447 "iops": 2115.6939443068127, 00:19:59.447 "mibps": 8.264429469948487, 00:19:59.447 "io_failed": 128, 00:19:59.447 "io_timeout": 0, 00:19:59.447 "avg_latency_us": 59953.01047528497, 00:19:59.447 "min_latency_us": 7804.741818181818, 00:19:59.447 "max_latency_us": 7015926.69090909 00:19:59.447 } 00:19:59.447 ], 00:19:59.447 "core_count": 1 00:19:59.447 } 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:59.447 Attaching 5 probes... 
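bdevperf reported its summary twice above, once as the latency table and once as the JSON block; the JSON is the easier of the two to post-process. For example, with that block saved to a file (results.json is only an illustrative name), jq can pull out the per-job numbers:

    # Extract IOPS, failed I/O count and average latency for each job.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json

The individual probe timestamps from trace.txt follow below.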
00:19:59.447 1473.340126: reset bdev controller NVMe0 00:19:59.447 1473.986433: reconnect bdev controller NVMe0 00:19:59.447 3475.521375: reconnect delay bdev controller NVMe0 00:19:59.447 3475.544321: reconnect bdev controller NVMe0 00:19:59.447 5476.239672: reconnect delay bdev controller NVMe0 00:19:59.447 5476.260170: reconnect bdev controller NVMe0 00:19:59.447 7476.639627: reconnect delay bdev controller NVMe0 00:19:59.447 7476.665475: reconnect bdev controller NVMe0 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82592 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82577 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82577 ']' 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82577 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82577 00:19:59.447 killing process with pid 82577 00:19:59.447 Received shutdown signal, test time was about 8.248841 seconds 00:19:59.447 00:19:59.447 Latency(us) 00:19:59.447 [2024-11-26T20:40:59.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.447 [2024-11-26T20:40:59.802Z] =================================================================================================================== 00:19:59.447 [2024-11-26T20:40:59.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82577' 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82577 00:19:59.447 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82577 00:19:59.706 20:40:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.965 20:41:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.965 rmmod nvme_tcp 00:19:59.965 rmmod nvme_fabrics 00:19:59.965 rmmod nvme_keyring 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82155 ']' 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82155 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82155 ']' 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82155 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82155 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.965 killing process with pid 82155 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82155' 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82155 00:19:59.965 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82155 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:00.224 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.482 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:00.482 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:00.482 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:00.482 20:41:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:00.482 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:00.482 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:00.483 ************************************ 00:20:00.483 END TEST nvmf_timeout 00:20:00.483 ************************************ 00:20:00.483 00:20:00.483 real 0m46.375s 00:20:00.483 user 2m16.195s 00:20:00.483 sys 0m5.819s 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:00.483 00:20:00.483 real 5m8.422s 00:20:00.483 user 13m24.161s 00:20:00.483 sys 1m10.010s 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.483 20:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.483 ************************************ 00:20:00.483 END TEST nvmf_host 00:20:00.483 ************************************ 00:20:00.483 20:41:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:00.483 20:41:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:00.483 00:20:00.483 real 13m4.213s 00:20:00.483 user 31m26.508s 00:20:00.483 sys 3m12.334s 00:20:00.483 20:41:00 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.483 20:41:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.483 ************************************ 00:20:00.483 END TEST nvmf_tcp 00:20:00.483 ************************************ 00:20:00.742 20:41:00 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:00.742 20:41:00 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:00.742 20:41:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:00.742 20:41:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.742 20:41:00 -- common/autotest_common.sh@10 -- # set +x 00:20:00.742 ************************************ 00:20:00.742 START TEST nvmf_dif 00:20:00.742 ************************************ 00:20:00.742 20:41:00 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:00.742 * Looking for test storage... 
00:20:00.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:00.742 20:41:00 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:00.742 20:41:00 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:20:00.742 20:41:00 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:00.742 20:41:01 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:00.742 20:41:01 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.742 20:41:01 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.742 --rc genhtml_branch_coverage=1 00:20:00.742 --rc genhtml_function_coverage=1 00:20:00.742 --rc genhtml_legend=1 00:20:00.742 --rc geninfo_all_blocks=1 00:20:00.742 --rc geninfo_unexecuted_blocks=1 00:20:00.742 00:20:00.742 ' 00:20:00.742 20:41:01 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.742 --rc genhtml_branch_coverage=1 00:20:00.742 --rc genhtml_function_coverage=1 00:20:00.742 --rc genhtml_legend=1 00:20:00.742 --rc geninfo_all_blocks=1 00:20:00.742 --rc geninfo_unexecuted_blocks=1 00:20:00.742 00:20:00.742 ' 00:20:00.742 20:41:01 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.742 --rc genhtml_branch_coverage=1 00:20:00.742 --rc genhtml_function_coverage=1 00:20:00.742 --rc genhtml_legend=1 00:20:00.742 --rc geninfo_all_blocks=1 00:20:00.742 --rc geninfo_unexecuted_blocks=1 00:20:00.742 00:20:00.742 ' 00:20:00.742 20:41:01 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.742 --rc genhtml_branch_coverage=1 00:20:00.742 --rc genhtml_function_coverage=1 00:20:00.742 --rc genhtml_legend=1 00:20:00.742 --rc geninfo_all_blocks=1 00:20:00.742 --rc geninfo_unexecuted_blocks=1 00:20:00.742 00:20:00.742 ' 00:20:00.742 20:41:01 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.742 20:41:01 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.742 20:41:01 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.002 20:41:01 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.002 20:41:01 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.002 20:41:01 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.002 20:41:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.002 20:41:01 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.002 20:41:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.002 20:41:01 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:01.002 20:41:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.002 20:41:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:01.002 20:41:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:01.002 20:41:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:01.002 20:41:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:01.002 20:41:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.002 20:41:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:01.002 20:41:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:01.002 20:41:01 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:01.002 Cannot find device "nvmf_init_br" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:01.002 Cannot find device "nvmf_init_br2" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:01.002 Cannot find device "nvmf_tgt_br" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.002 Cannot find device "nvmf_tgt_br2" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:01.002 Cannot find device "nvmf_init_br" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:01.002 Cannot find device "nvmf_init_br2" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:01.002 Cannot find device "nvmf_tgt_br" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:01.002 Cannot find device "nvmf_tgt_br2" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:01.002 Cannot find device "nvmf_br" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:01.002 Cannot find device "nvmf_init_if" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:01.002 Cannot find device "nvmf_init_if2" 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:01.002 20:41:01 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:01.003 20:41:01 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.262 20:41:01 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:01.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:01.262 00:20:01.262 --- 10.0.0.3 ping statistics --- 00:20:01.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.262 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:01.262 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:01.262 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:01.262 00:20:01.262 --- 10.0.0.4 ping statistics --- 00:20:01.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.262 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:01.262 00:20:01.262 --- 10.0.0.1 ping statistics --- 00:20:01.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.262 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:01.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:01.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:01.262 00:20:01.262 --- 10.0.0.2 ping statistics --- 00:20:01.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.262 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:01.262 20:41:01 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:01.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:01.520 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:01.520 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.520 20:41:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:01.520 20:41:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:01.520 20:41:01 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.520 20:41:01 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.520 20:41:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.778 20:41:01 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83136 00:20:01.778 20:41:01 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83136 00:20:01.778 20:41:01 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.778 20:41:01 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83136 ']' 00:20:01.778 20:41:01 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.778 20:41:01 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.778 20:41:01 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.778 20:41:01 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.778 20:41:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.778 [2024-11-26 20:41:01.933021] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:20:01.778 [2024-11-26 20:41:01.933135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.778 [2024-11-26 20:41:02.082611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.036 [2024-11-26 20:41:02.155358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
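For reference, the veth topology that nvmf_veth_init assembles in the trace above condenses to the sketch below. Every device name, address, and port is taken from the logged commands; only the first initiator/target pair is shown, the per-link "up" commands are omitted, and the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) follows the same pattern. The ipts helper seen above is SPDK's iptables wrapper, which additionally tags each rule with an SPDK_NVMF comment.

ip netns add nvmf_tgt_ns_spdk                                       # target gets its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator end plus its bridge-side peer
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target end plus its bridge-side peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge                                     # bridge joins the two peer ends left in the root namespace
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic (port 4420) arriving on the initiator interface

The four pings above confirm reachability in both directions across the bridge, and the nvmf_tgt instance now starting is launched inside nvmf_tgt_ns_spdk (via NVMF_TARGET_NS_CMD), so its TCP listener on 10.0.0.3:4420 is reachable from fio running in the root namespace.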
00:20:02.036 [2024-11-26 20:41:02.155429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.036 [2024-11-26 20:41:02.155447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.036 [2024-11-26 20:41:02.155461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.037 [2024-11-26 20:41:02.155474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.037 [2024-11-26 20:41:02.156008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.037 [2024-11-26 20:41:02.213943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:02.037 20:41:02 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 20:41:02 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.037 20:41:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:02.037 20:41:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 [2024-11-26 20:41:02.316632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.037 20:41:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 ************************************ 00:20:02.037 START TEST fio_dif_1_default 00:20:02.037 ************************************ 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 bdev_null0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:02.037 
20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:02.037 [2024-11-26 20:41:02.360756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.037 { 00:20:02.037 "params": { 00:20:02.037 "name": "Nvme$subsystem", 00:20:02.037 "trtype": "$TEST_TRANSPORT", 00:20:02.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.037 "adrfam": "ipv4", 00:20:02.037 "trsvcid": "$NVMF_PORT", 00:20:02.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.037 "hdgst": ${hdgst:-false}, 00:20:02.037 "ddgst": ${ddgst:-false} 00:20:02.037 }, 00:20:02.037 "method": "bdev_nvme_attach_controller" 00:20:02.037 } 00:20:02.037 EOF 00:20:02.037 )") 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:02.037 20:41:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:02.037 "params": { 00:20:02.037 "name": "Nvme0", 00:20:02.037 "trtype": "tcp", 00:20:02.037 "traddr": "10.0.0.3", 00:20:02.037 "adrfam": "ipv4", 00:20:02.037 "trsvcid": "4420", 00:20:02.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:02.037 "hdgst": false, 00:20:02.037 "ddgst": false 00:20:02.037 }, 00:20:02.037 "method": "bdev_nvme_attach_controller" 00:20:02.037 }' 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.295 20:41:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.295 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:02.295 fio-3.35 00:20:02.295 Starting 1 thread 00:20:14.499 00:20:14.499 filename0: (groupid=0, jobs=1): err= 0: pid=83195: Tue Nov 26 20:41:13 2024 00:20:14.499 read: IOPS=9048, BW=35.3MiB/s (37.1MB/s)(354MiB/10001msec) 00:20:14.499 slat (nsec): min=5889, max=53948, avg=8411.11, stdev=3839.46 00:20:14.499 clat (usec): min=313, max=2621, avg=417.41, stdev=48.88 00:20:14.499 lat (usec): min=319, max=2652, avg=425.82, stdev=49.75 00:20:14.499 clat percentiles (usec): 00:20:14.499 | 1.00th=[ 338], 5.00th=[ 
351], 10.00th=[ 363], 20.00th=[ 375], 00:20:14.499 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 424], 00:20:14.499 | 70.00th=[ 441], 80.00th=[ 457], 90.00th=[ 478], 95.00th=[ 498], 00:20:14.499 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 594], 00:20:14.499 | 99.99th=[ 1598] 00:20:14.499 bw ( KiB/s): min=32288, max=38112, per=99.88%, avg=36150.74, stdev=1429.45, samples=19 00:20:14.499 iops : min= 8072, max= 9528, avg=9037.89, stdev=357.40, samples=19 00:20:14.499 lat (usec) : 500=95.57%, 750=4.42% 00:20:14.499 lat (msec) : 2=0.01%, 4=0.01% 00:20:14.499 cpu : usr=84.58%, sys=13.58%, ctx=14, majf=0, minf=9 00:20:14.499 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.499 issued rwts: total=90496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.499 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:14.499 00:20:14.499 Run status group 0 (all jobs): 00:20:14.499 READ: bw=35.3MiB/s (37.1MB/s), 35.3MiB/s-35.3MiB/s (37.1MB/s-37.1MB/s), io=354MiB (371MB), run=10001-10001msec 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.499 00:20:14.499 real 0m11.081s 00:20:14.499 user 0m9.156s 00:20:14.499 sys 0m1.654s 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 ************************************ 00:20:14.499 END TEST fio_dif_1_default 00:20:14.499 ************************************ 00:20:14.499 20:41:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:14.499 20:41:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.499 20:41:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 ************************************ 00:20:14.499 START TEST fio_dif_1_multi_subsystems 00:20:14.499 ************************************ 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- 
# fio_dif_1_multi_subsystems 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 bdev_null0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 [2024-11-26 20:41:13.494037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.499 bdev_null1 00:20:14.499 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.500 20:41:13 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.500 { 00:20:14.500 "params": { 00:20:14.500 "name": "Nvme$subsystem", 00:20:14.500 "trtype": "$TEST_TRANSPORT", 00:20:14.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.500 "adrfam": "ipv4", 00:20:14.500 "trsvcid": "$NVMF_PORT", 00:20:14.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.500 "hdgst": ${hdgst:-false}, 00:20:14.500 "ddgst": ${ddgst:-false} 00:20:14.500 }, 00:20:14.500 "method": "bdev_nvme_attach_controller" 00:20:14.500 } 00:20:14.500 EOF 00:20:14.500 )") 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:14.500 20:41:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.500 { 00:20:14.500 "params": { 00:20:14.500 "name": "Nvme$subsystem", 00:20:14.500 "trtype": "$TEST_TRANSPORT", 00:20:14.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.500 "adrfam": "ipv4", 00:20:14.500 "trsvcid": "$NVMF_PORT", 00:20:14.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.500 "hdgst": ${hdgst:-false}, 00:20:14.500 "ddgst": ${ddgst:-false} 00:20:14.500 }, 00:20:14.500 "method": "bdev_nvme_attach_controller" 00:20:14.500 } 00:20:14.500 EOF 00:20:14.500 )") 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
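The fio_bdev invocation traced above boils down to running stock fio with SPDK's bdev engine preloaded, the generated bdev_nvme_attach_controller JSON fed in on /dev/fd/62, and the generated job file on /dev/fd/61. Outside the harness, a roughly equivalent standalone invocation would look like the sketch below; bdev.json and job.fio are hypothetical stand-in file names for those two file descriptors, while the plugin path and option names mirror the trace.

# bdev.json: the JSON produced by gen_nvmf_target_json (its bdev_nvme_attach_controller entries are printed just below)
# job.fio:   the job file emitted by gen_fio_conf, one job per subsystem
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

By SPDK's usual <name>n<nsid> convention, the attached controllers Nvme0 and Nvme1 expose their namespaces as bdevs Nvme0n1 and Nvme1n1, which is why the run below reports two fio jobs, filename0 and filename1, one per subsystem.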
00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:14.500 "params": { 00:20:14.500 "name": "Nvme0", 00:20:14.500 "trtype": "tcp", 00:20:14.500 "traddr": "10.0.0.3", 00:20:14.500 "adrfam": "ipv4", 00:20:14.500 "trsvcid": "4420", 00:20:14.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:14.500 "hdgst": false, 00:20:14.500 "ddgst": false 00:20:14.500 }, 00:20:14.500 "method": "bdev_nvme_attach_controller" 00:20:14.500 },{ 00:20:14.500 "params": { 00:20:14.500 "name": "Nvme1", 00:20:14.500 "trtype": "tcp", 00:20:14.500 "traddr": "10.0.0.3", 00:20:14.500 "adrfam": "ipv4", 00:20:14.500 "trsvcid": "4420", 00:20:14.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.500 "hdgst": false, 00:20:14.500 "ddgst": false 00:20:14.500 }, 00:20:14.500 "method": "bdev_nvme_attach_controller" 00:20:14.500 }' 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:14.500 20:41:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.500 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:14.500 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:14.500 fio-3.35 00:20:14.500 Starting 2 threads 00:20:24.512 00:20:24.512 filename0: (groupid=0, jobs=1): err= 0: pid=83357: Tue Nov 26 20:41:24 2024 00:20:24.512 read: IOPS=5043, BW=19.7MiB/s (20.7MB/s)(197MiB/10001msec) 00:20:24.512 slat (usec): min=5, max=104, avg=12.70, stdev= 4.63 00:20:24.512 clat (usec): min=443, max=1281, avg=758.70, stdev=66.21 00:20:24.512 lat (usec): min=450, max=1292, avg=771.40, stdev=67.32 00:20:24.512 clat percentiles (usec): 00:20:24.512 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 701], 00:20:24.512 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:20:24.512 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 881], 00:20:24.512 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 1012], 00:20:24.512 | 99.99th=[ 1045] 00:20:24.512 bw ( KiB/s): min=19456, max=20704, per=50.02%, avg=20181.89, stdev=322.66, samples=19 00:20:24.512 iops : min= 4864, max= 5176, 
avg=5045.47, stdev=80.67, samples=19 00:20:24.512 lat (usec) : 500=0.02%, 750=49.74%, 1000=50.18% 00:20:24.512 lat (msec) : 2=0.06% 00:20:24.512 cpu : usr=90.32%, sys=8.38%, ctx=10, majf=0, minf=0 00:20:24.512 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.512 issued rwts: total=50444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.512 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:24.512 filename1: (groupid=0, jobs=1): err= 0: pid=83358: Tue Nov 26 20:41:24 2024 00:20:24.512 read: IOPS=5043, BW=19.7MiB/s (20.7MB/s)(197MiB/10001msec) 00:20:24.512 slat (nsec): min=6470, max=87003, avg=12872.14, stdev=4723.63 00:20:24.512 clat (usec): min=408, max=1340, avg=757.82, stdev=61.36 00:20:24.512 lat (usec): min=415, max=1365, avg=770.69, stdev=61.98 00:20:24.512 clat percentiles (usec): 00:20:24.512 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 701], 00:20:24.512 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:20:24.512 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:20:24.512 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 1012], 00:20:24.512 | 99.99th=[ 1254] 00:20:24.512 bw ( KiB/s): min=19456, max=20704, per=50.01%, avg=20178.53, stdev=319.17, samples=19 00:20:24.512 iops : min= 4864, max= 5176, avg=5044.63, stdev=79.79, samples=19 00:20:24.512 lat (usec) : 500=0.02%, 750=51.29%, 1000=48.63% 00:20:24.512 lat (msec) : 2=0.06% 00:20:24.512 cpu : usr=90.24%, sys=8.46%, ctx=166, majf=0, minf=0 00:20:24.512 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.512 issued rwts: total=50440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.512 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:24.512 00:20:24.512 Run status group 0 (all jobs): 00:20:24.512 READ: bw=39.4MiB/s (41.3MB/s), 19.7MiB/s-19.7MiB/s (20.7MB/s-20.7MB/s), io=394MiB (413MB), run=10001-10001msec 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 00:20:24.512 real 0m11.189s 00:20:24.512 user 0m18.843s 00:20:24.512 sys 0m1.988s 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 ************************************ 00:20:24.512 END TEST fio_dif_1_multi_subsystems 00:20:24.512 ************************************ 00:20:24.512 20:41:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:24.512 20:41:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:24.512 20:41:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 ************************************ 00:20:24.512 START TEST fio_dif_rand_params 00:20:24.512 ************************************ 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:24.512 20:41:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 bdev_null0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.512 [2024-11-26 20:41:24.735175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:24.512 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:24.513 { 00:20:24.513 "params": { 00:20:24.513 "name": "Nvme$subsystem", 00:20:24.513 "trtype": "$TEST_TRANSPORT", 00:20:24.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.513 "adrfam": "ipv4", 00:20:24.513 "trsvcid": "$NVMF_PORT", 
00:20:24.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.513 "hdgst": ${hdgst:-false}, 00:20:24.513 "ddgst": ${ddgst:-false} 00:20:24.513 }, 00:20:24.513 "method": "bdev_nvme_attach_controller" 00:20:24.513 } 00:20:24.513 EOF 00:20:24.513 )") 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:24.513 "params": { 00:20:24.513 "name": "Nvme0", 00:20:24.513 "trtype": "tcp", 00:20:24.513 "traddr": "10.0.0.3", 00:20:24.513 "adrfam": "ipv4", 00:20:24.513 "trsvcid": "4420", 00:20:24.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.513 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.513 "hdgst": false, 00:20:24.513 "ddgst": false 00:20:24.513 }, 00:20:24.513 "method": "bdev_nvme_attach_controller" 00:20:24.513 }' 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.513 20:41:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.772 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:24.772 ... 
00:20:24.772 fio-3.35 00:20:24.772 Starting 3 threads 00:20:31.383 00:20:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=83514: Tue Nov 26 20:41:30 2024 00:20:31.383 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5007msec) 00:20:31.383 slat (nsec): min=6872, max=63400, avg=10591.33, stdev=4680.84 00:20:31.383 clat (usec): min=4360, max=12617, avg=12008.47, stdev=495.98 00:20:31.383 lat (usec): min=4369, max=12633, avg=12019.06, stdev=495.63 00:20:31.383 clat percentiles (usec): 00:20:31.383 | 1.00th=[10814], 5.00th=[11076], 10.00th=[11731], 20.00th=[11863], 00:20:31.383 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:20:31.383 | 70.00th=[12256], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:31.383 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:20:31.383 | 99.99th=[12649] 00:20:31.383 bw ( KiB/s): min=31488, max=33792, per=33.35%, avg=31872.00, stdev=746.36, samples=10 00:20:31.383 iops : min= 246, max= 264, avg=249.00, stdev= 5.83, samples=10 00:20:31.383 lat (msec) : 10=0.24%, 20=99.76% 00:20:31.383 cpu : usr=90.47%, sys=8.89%, ctx=70, majf=0, minf=0 00:20:31.383 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.383 issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.383 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=83515: Tue Nov 26 20:41:30 2024 00:20:31.383 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(156MiB/5003msec) 00:20:31.383 slat (nsec): min=7261, max=81855, avg=14340.18, stdev=4674.52 00:20:31.383 clat (usec): min=10326, max=12673, avg=12021.31, stdev=331.71 00:20:31.383 lat (usec): min=10340, max=12726, avg=12035.65, stdev=331.59 00:20:31.383 clat percentiles (usec): 00:20:31.383 | 1.00th=[10945], 5.00th=[11076], 10.00th=[11731], 20.00th=[11863], 00:20:31.383 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:20:31.383 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:31.383 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12649], 99.95th=[12649], 00:20:31.383 | 99.99th=[12649] 00:20:31.383 bw ( KiB/s): min=31488, max=33024, per=33.40%, avg=31914.67, stdev=557.94, samples=9 00:20:31.383 iops : min= 246, max= 258, avg=249.33, stdev= 4.36, samples=9 00:20:31.383 lat (msec) : 20=100.00% 00:20:31.383 cpu : usr=90.80%, sys=8.36%, ctx=66, majf=0, minf=0 00:20:31.383 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.383 issued rwts: total=1245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.383 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=83516: Tue Nov 26 20:41:30 2024 00:20:31.383 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(156MiB/5002msec) 00:20:31.383 slat (nsec): min=6940, max=54586, avg=14803.74, stdev=4487.09 00:20:31.383 clat (usec): min=10325, max=12653, avg=12019.08, stdev=329.90 00:20:31.383 lat (usec): min=10338, max=12668, avg=12033.89, stdev=330.06 00:20:31.383 clat percentiles (usec): 00:20:31.383 | 1.00th=[10945], 5.00th=[11076], 10.00th=[11731], 20.00th=[11863], 00:20:31.383 | 30.00th=[11994], 40.00th=[11994], 
50.00th=[12125], 60.00th=[12125], 00:20:31.383 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:31.383 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:20:31.383 | 99.99th=[12649] 00:20:31.383 bw ( KiB/s): min=31488, max=33024, per=33.40%, avg=31914.67, stdev=557.94, samples=9 00:20:31.383 iops : min= 246, max= 258, avg=249.33, stdev= 4.36, samples=9 00:20:31.383 lat (msec) : 20=100.00% 00:20:31.383 cpu : usr=91.26%, sys=8.18%, ctx=8, majf=0, minf=0 00:20:31.383 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.383 issued rwts: total=1245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.383 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:31.383 00:20:31.383 Run status group 0 (all jobs): 00:20:31.383 READ: bw=93.3MiB/s (97.9MB/s), 31.1MiB/s-31.2MiB/s (32.6MB/s-32.7MB/s), io=467MiB (490MB), run=5002-5007msec 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:31.383 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:31.384 20:41:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 bdev_null0 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 [2024-11-26 20:41:30.794278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 bdev_null1 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 bdev_null2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.384 { 00:20:31.384 "params": { 00:20:31.384 "name": "Nvme$subsystem", 00:20:31.384 "trtype": "$TEST_TRANSPORT", 00:20:31.384 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:31.384 "adrfam": "ipv4", 00:20:31.384 "trsvcid": "$NVMF_PORT", 00:20:31.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.384 "hdgst": ${hdgst:-false}, 00:20:31.384 "ddgst": ${ddgst:-false} 00:20:31.384 }, 00:20:31.384 "method": "bdev_nvme_attach_controller" 00:20:31.384 } 00:20:31.384 EOF 00:20:31.384 )") 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.384 { 00:20:31.384 "params": { 00:20:31.384 "name": "Nvme$subsystem", 00:20:31.384 "trtype": "$TEST_TRANSPORT", 00:20:31.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.384 "adrfam": "ipv4", 00:20:31.384 "trsvcid": "$NVMF_PORT", 00:20:31.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.384 "hdgst": ${hdgst:-false}, 00:20:31.384 "ddgst": ${ddgst:-false} 00:20:31.384 }, 00:20:31.384 "method": "bdev_nvme_attach_controller" 00:20:31.384 } 00:20:31.384 EOF 00:20:31.384 )") 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.384 { 00:20:31.384 "params": { 00:20:31.384 "name": "Nvme$subsystem", 
00:20:31.384 "trtype": "$TEST_TRANSPORT", 00:20:31.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.384 "adrfam": "ipv4", 00:20:31.384 "trsvcid": "$NVMF_PORT", 00:20:31.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.384 "hdgst": ${hdgst:-false}, 00:20:31.384 "ddgst": ${ddgst:-false} 00:20:31.384 }, 00:20:31.384 "method": "bdev_nvme_attach_controller" 00:20:31.384 } 00:20:31.384 EOF 00:20:31.384 )") 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:31.384 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:31.385 "params": { 00:20:31.385 "name": "Nvme0", 00:20:31.385 "trtype": "tcp", 00:20:31.385 "traddr": "10.0.0.3", 00:20:31.385 "adrfam": "ipv4", 00:20:31.385 "trsvcid": "4420", 00:20:31.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:31.385 "hdgst": false, 00:20:31.385 "ddgst": false 00:20:31.385 }, 00:20:31.385 "method": "bdev_nvme_attach_controller" 00:20:31.385 },{ 00:20:31.385 "params": { 00:20:31.385 "name": "Nvme1", 00:20:31.385 "trtype": "tcp", 00:20:31.385 "traddr": "10.0.0.3", 00:20:31.385 "adrfam": "ipv4", 00:20:31.385 "trsvcid": "4420", 00:20:31.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.385 "hdgst": false, 00:20:31.385 "ddgst": false 00:20:31.385 }, 00:20:31.385 "method": "bdev_nvme_attach_controller" 00:20:31.385 },{ 00:20:31.385 "params": { 00:20:31.385 "name": "Nvme2", 00:20:31.385 "trtype": "tcp", 00:20:31.385 "traddr": "10.0.0.3", 00:20:31.385 "adrfam": "ipv4", 00:20:31.385 "trsvcid": "4420", 00:20:31.385 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.385 "hdgst": false, 00:20:31.385 "ddgst": false 00:20:31.385 }, 00:20:31.385 "method": "bdev_nvme_attach_controller" 00:20:31.385 }' 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:31.385 20:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.385 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:31.385 ... 00:20:31.385 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:31.385 ... 00:20:31.385 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:31.385 ... 00:20:31.385 fio-3.35 00:20:31.385 Starting 24 threads 00:20:43.675 00:20:43.675 filename0: (groupid=0, jobs=1): err= 0: pid=83611: Tue Nov 26 20:41:41 2024 00:20:43.675 read: IOPS=227, BW=911KiB/s (933kB/s)(9136KiB/10027msec) 00:20:43.675 slat (usec): min=4, max=8071, avg=39.03, stdev=384.92 00:20:43.675 clat (msec): min=13, max=141, avg=70.05, stdev=22.03 00:20:43.675 lat (msec): min=13, max=141, avg=70.08, stdev=22.02 00:20:43.675 clat percentiles (msec): 00:20:43.675 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 50], 00:20:43.675 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:20:43.675 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 112], 00:20:43.675 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 140], 00:20:43.675 | 99.99th=[ 142] 00:20:43.675 bw ( KiB/s): min= 584, max= 1464, per=4.30%, avg=909.10, stdev=172.41, samples=20 00:20:43.675 iops : min= 146, max= 366, avg=227.25, stdev=43.10, samples=20 00:20:43.675 lat (msec) : 20=0.26%, 50=20.71%, 100=70.10%, 250=8.93% 00:20:43.675 cpu : usr=36.33%, sys=1.74%, ctx=1113, majf=0, minf=9 00:20:43.675 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:43.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 issued rwts: total=2284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.675 filename0: (groupid=0, jobs=1): err= 0: pid=83612: Tue Nov 26 20:41:41 2024 00:20:43.675 read: IOPS=230, BW=921KiB/s (943kB/s)(9212KiB/10006msec) 00:20:43.675 slat (usec): min=5, max=5059, avg=25.18, stdev=155.75 00:20:43.675 clat (msec): min=14, max=135, avg=69.39, stdev=22.15 00:20:43.675 lat (msec): min=14, max=135, avg=69.42, stdev=22.15 00:20:43.675 clat percentiles (msec): 00:20:43.675 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:20:43.675 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:20:43.675 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 112], 00:20:43.675 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.675 | 99.99th=[ 136] 00:20:43.675 bw ( KiB/s): min= 616, max= 1344, per=4.30%, avg=911.58, stdev=159.83, samples=19 00:20:43.675 iops : min= 154, max= 336, avg=227.89, stdev=39.96, samples=19 00:20:43.675 lat (msec) : 20=0.17%, 50=23.27%, 100=68.04%, 250=8.51% 00:20:43.675 cpu : usr=37.99%, sys=1.98%, ctx=1116, majf=0, minf=9 00:20:43.675 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.675 
latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.675 filename0: (groupid=0, jobs=1): err= 0: pid=83613: Tue Nov 26 20:41:41 2024 00:20:43.675 read: IOPS=212, BW=852KiB/s (872kB/s)(8564KiB/10057msec) 00:20:43.675 slat (usec): min=4, max=5039, avg=25.54, stdev=170.89 00:20:43.675 clat (msec): min=4, max=136, avg=74.87, stdev=26.63 00:20:43.675 lat (msec): min=4, max=136, avg=74.89, stdev=26.63 00:20:43.675 clat percentiles (msec): 00:20:43.675 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 43], 20.00th=[ 57], 00:20:43.675 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:20:43.675 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 121], 00:20:43.675 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:20:43.675 | 99.99th=[ 138] 00:20:43.675 bw ( KiB/s): min= 560, max= 2160, per=4.01%, avg=849.90, stdev=325.86, samples=20 00:20:43.675 iops : min= 140, max= 540, avg=212.45, stdev=81.47, samples=20 00:20:43.675 lat (msec) : 10=2.24%, 20=2.24%, 50=9.76%, 100=69.41%, 250=16.35% 00:20:43.675 cpu : usr=40.19%, sys=1.78%, ctx=1487, majf=0, minf=0 00:20:43.675 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=73.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:43.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 complete : 0=0.0%, 4=90.2%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 issued rwts: total=2141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.675 filename0: (groupid=0, jobs=1): err= 0: pid=83614: Tue Nov 26 20:41:41 2024 00:20:43.675 read: IOPS=207, BW=829KiB/s (848kB/s)(8332KiB/10056msec) 00:20:43.675 slat (usec): min=5, max=8021, avg=22.18, stdev=196.26 00:20:43.675 clat (usec): min=944, max=171210, avg=76997.18, stdev=28877.78 00:20:43.675 lat (usec): min=955, max=171219, avg=77019.36, stdev=28877.77 00:20:43.675 clat percentiles (msec): 00:20:43.675 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 59], 00:20:43.675 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:20:43.675 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 113], 95.00th=[ 121], 00:20:43.675 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 155], 00:20:43.675 | 99.99th=[ 171] 00:20:43.675 bw ( KiB/s): min= 568, max= 2285, per=3.91%, avg=828.25, stdev=359.01, samples=20 00:20:43.675 iops : min= 142, max= 571, avg=207.05, stdev=89.70, samples=20 00:20:43.675 lat (usec) : 1000=0.10% 00:20:43.675 lat (msec) : 10=2.98%, 20=3.46%, 50=8.31%, 100=65.29%, 250=19.88% 00:20:43.675 cpu : usr=39.68%, sys=1.71%, ctx=1096, majf=0, minf=9 00:20:43.675 IO depths : 1=0.1%, 2=2.8%, 4=11.3%, 8=70.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:43.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 complete : 0=0.0%, 4=90.9%, 8=6.6%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.675 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.675 filename0: (groupid=0, jobs=1): err= 0: pid=83615: Tue Nov 26 20:41:41 2024 00:20:43.675 read: IOPS=218, BW=875KiB/s (896kB/s)(8768KiB/10016msec) 00:20:43.675 slat (usec): min=5, max=7992, avg=26.87, stdev=209.29 00:20:43.675 clat (msec): min=15, max=135, avg=72.97, stdev=21.86 00:20:43.675 lat (msec): min=15, max=135, avg=73.00, stdev=21.86 00:20:43.675 clat percentiles (msec): 00:20:43.675 | 1.00th=[ 24], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 54], 00:20:43.675 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 
00:20:43.675 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 115], 00:20:43.675 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.675 | 99.99th=[ 136] 00:20:43.676 bw ( KiB/s): min= 656, max= 1024, per=4.08%, avg=863.16, stdev=123.50, samples=19 00:20:43.676 iops : min= 164, max= 256, avg=215.79, stdev=30.88, samples=19 00:20:43.676 lat (msec) : 20=0.55%, 50=15.97%, 100=72.31%, 250=11.18% 00:20:43.676 cpu : usr=43.31%, sys=1.82%, ctx=1314, majf=0, minf=9 00:20:43.676 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=79.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename0: (groupid=0, jobs=1): err= 0: pid=83616: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=216, BW=867KiB/s (888kB/s)(8704KiB/10035msec) 00:20:43.676 slat (usec): min=4, max=4055, avg=22.53, stdev=148.82 00:20:43.676 clat (msec): min=20, max=155, avg=73.63, stdev=22.93 00:20:43.676 lat (msec): min=21, max=155, avg=73.66, stdev=22.93 00:20:43.676 clat percentiles (msec): 00:20:43.676 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 55], 00:20:43.676 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:20:43.676 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 107], 95.00th=[ 117], 00:20:43.676 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 144], 00:20:43.676 | 99.99th=[ 155] 00:20:43.676 bw ( KiB/s): min= 536, max= 1523, per=4.08%, avg=864.15, stdev=195.87, samples=20 00:20:43.676 iops : min= 134, max= 380, avg=216.00, stdev=48.83, samples=20 00:20:43.676 lat (msec) : 50=16.96%, 100=70.96%, 250=12.09% 00:20:43.676 cpu : usr=36.25%, sys=1.53%, ctx=1134, majf=0, minf=9 00:20:43.676 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=79.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename0: (groupid=0, jobs=1): err= 0: pid=83617: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=232, BW=930KiB/s (952kB/s)(9300KiB/10003msec) 00:20:43.676 slat (usec): min=4, max=8039, avg=31.59, stdev=310.62 00:20:43.676 clat (usec): min=1795, max=136450, avg=68714.39, stdev=23200.76 00:20:43.676 lat (usec): min=1804, max=136483, avg=68745.98, stdev=23208.16 00:20:43.676 clat percentiles (msec): 00:20:43.676 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:20:43.676 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:20:43.676 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 111], 00:20:43.676 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 138], 00:20:43.676 | 99.99th=[ 138] 00:20:43.676 bw ( KiB/s): min= 616, max= 1112, per=4.27%, avg=904.84, stdev=128.22, samples=19 00:20:43.676 iops : min= 154, max= 278, avg=226.21, stdev=32.05, samples=19 00:20:43.676 lat (msec) : 2=0.17%, 4=1.25%, 10=0.39%, 20=0.13%, 50=23.74% 00:20:43.676 lat (msec) : 100=65.98%, 250=8.34% 00:20:43.676 cpu : usr=35.68%, sys=1.79%, ctx=1124, majf=0, minf=9 00:20:43.676 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename0: (groupid=0, jobs=1): err= 0: pid=83618: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=214, BW=858KiB/s (878kB/s)(8580KiB/10002msec) 00:20:43.676 slat (usec): min=7, max=8038, avg=38.56, stdev=368.66 00:20:43.676 clat (msec): min=4, max=142, avg=74.40, stdev=22.57 00:20:43.676 lat (msec): min=4, max=142, avg=74.44, stdev=22.57 00:20:43.676 clat percentiles (msec): 00:20:43.676 | 1.00th=[ 25], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 56], 00:20:43.676 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 79], 00:20:43.676 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 115], 00:20:43.676 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:43.676 | 99.99th=[ 144] 00:20:43.676 bw ( KiB/s): min= 616, max= 1152, per=3.99%, avg=845.16, stdev=130.65, samples=19 00:20:43.676 iops : min= 154, max= 288, avg=211.26, stdev=32.65, samples=19 00:20:43.676 lat (msec) : 10=0.28%, 20=0.42%, 50=13.19%, 100=73.89%, 250=12.21% 00:20:43.676 cpu : usr=38.50%, sys=1.89%, ctx=1657, majf=0, minf=9 00:20:43.676 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=89.2%, 8=8.9%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename1: (groupid=0, jobs=1): err= 0: pid=83619: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.80MiB/10063msec) 00:20:43.676 slat (usec): min=7, max=8019, avg=20.01, stdev=178.88 00:20:43.676 clat (usec): min=1001, max=135643, avg=63923.60, stdev=34377.04 00:20:43.676 lat (usec): min=1011, max=135656, avg=63943.61, stdev=34378.45 00:20:43.676 clat percentiles (usec): 00:20:43.676 | 1.00th=[ 1614], 5.00th=[ 1696], 10.00th=[ 1778], 20.00th=[ 28705], 00:20:43.676 | 30.00th=[ 54789], 40.00th=[ 66847], 50.00th=[ 71828], 60.00th=[ 76022], 00:20:43.676 | 70.00th=[ 82314], 80.00th=[ 88605], 90.00th=[104334], 95.00th=[113771], 00:20:43.676 | 99.00th=[130548], 99.50th=[132645], 99.90th=[133694], 99.95th=[133694], 00:20:43.676 | 99.99th=[135267] 00:20:43.676 bw ( KiB/s): min= 584, max= 4698, per=4.70%, avg=994.95, stdev=877.94, samples=20 00:20:43.676 iops : min= 146, max= 1174, avg=248.70, stdev=219.38, samples=20 00:20:43.676 lat (msec) : 2=11.56%, 4=1.28%, 10=2.47%, 20=1.91%, 50=9.92% 00:20:43.676 lat (msec) : 100=61.90%, 250=10.96% 00:20:43.676 cpu : usr=40.87%, sys=2.01%, ctx=1157, majf=0, minf=0 00:20:43.676 IO depths : 1=0.9%, 2=3.3%, 4=10.0%, 8=71.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=90.2%, 8=7.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename1: (groupid=0, jobs=1): err= 0: pid=83620: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=217, BW=869KiB/s (890kB/s)(8716KiB/10031msec) 00:20:43.676 slat (usec): min=5, max=8040, avg=41.03, stdev=364.29 00:20:43.676 clat (msec): min=10, 
max=144, avg=73.42, stdev=22.67 00:20:43.676 lat (msec): min=10, max=144, avg=73.46, stdev=22.69 00:20:43.676 clat percentiles (msec): 00:20:43.676 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 56], 00:20:43.676 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:20:43.676 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 117], 00:20:43.676 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.676 | 99.99th=[ 144] 00:20:43.676 bw ( KiB/s): min= 585, max= 1526, per=4.10%, avg=867.40, stdev=190.80, samples=20 00:20:43.676 iops : min= 146, max= 381, avg=216.75, stdev=47.64, samples=20 00:20:43.676 lat (msec) : 20=0.83%, 50=12.67%, 100=75.26%, 250=11.24% 00:20:43.676 cpu : usr=40.66%, sys=1.82%, ctx=1245, majf=0, minf=9 00:20:43.676 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=77.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename1: (groupid=0, jobs=1): err= 0: pid=83621: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=218, BW=873KiB/s (894kB/s)(8752KiB/10027msec) 00:20:43.676 slat (usec): min=4, max=8027, avg=29.16, stdev=270.81 00:20:43.676 clat (msec): min=23, max=140, avg=73.12, stdev=21.89 00:20:43.676 lat (msec): min=23, max=140, avg=73.15, stdev=21.90 00:20:43.676 clat percentiles (msec): 00:20:43.676 | 1.00th=[ 28], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:20:43.676 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:20:43.676 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 115], 00:20:43.676 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:20:43.676 | 99.99th=[ 142] 00:20:43.676 bw ( KiB/s): min= 616, max= 1024, per=4.11%, avg=870.85, stdev=126.61, samples=20 00:20:43.676 iops : min= 154, max= 256, avg=217.70, stdev=31.64, samples=20 00:20:43.676 lat (msec) : 50=17.64%, 100=71.53%, 250=10.83% 00:20:43.676 cpu : usr=36.15%, sys=1.64%, ctx=1170, majf=0, minf=9 00:20:43.676 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.676 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.676 filename1: (groupid=0, jobs=1): err= 0: pid=83622: Tue Nov 26 20:41:41 2024 00:20:43.676 read: IOPS=220, BW=882KiB/s (903kB/s)(8848KiB/10032msec) 00:20:43.676 slat (usec): min=5, max=8046, avg=32.64, stdev=346.36 00:20:43.676 clat (msec): min=20, max=134, avg=72.39, stdev=21.14 00:20:43.677 lat (msec): min=20, max=134, avg=72.42, stdev=21.15 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 53], 00:20:43.677 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:20:43.677 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 114], 00:20:43.677 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.677 | 99.99th=[ 136] 00:20:43.677 bw ( KiB/s): min= 560, max= 1253, per=4.14%, avg=877.85, stdev=147.45, samples=20 00:20:43.677 iops : min= 140, max= 313, avg=219.45, stdev=36.83, samples=20 00:20:43.677 lat (msec) : 50=17.63%, 
100=72.06%, 250=10.31% 00:20:43.677 cpu : usr=31.54%, sys=1.28%, ctx=918, majf=0, minf=9 00:20:43.677 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename1: (groupid=0, jobs=1): err= 0: pid=83623: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=216, BW=865KiB/s (886kB/s)(8700KiB/10052msec) 00:20:43.677 slat (usec): min=4, max=8042, avg=29.43, stdev=309.89 00:20:43.677 clat (msec): min=11, max=141, avg=73.75, stdev=22.55 00:20:43.677 lat (msec): min=11, max=141, avg=73.78, stdev=22.55 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:20:43.677 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 80], 00:20:43.677 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 117], 00:20:43.677 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:20:43.677 | 99.99th=[ 142] 00:20:43.677 bw ( KiB/s): min= 584, max= 1552, per=4.08%, avg=864.80, stdev=194.65, samples=20 00:20:43.677 iops : min= 146, max= 388, avg=216.20, stdev=48.66, samples=20 00:20:43.677 lat (msec) : 20=2.11%, 50=12.83%, 100=73.52%, 250=11.54% 00:20:43.677 cpu : usr=37.80%, sys=1.85%, ctx=1120, majf=0, minf=9 00:20:43.677 IO depths : 1=0.1%, 2=1.0%, 4=3.6%, 8=79.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename1: (groupid=0, jobs=1): err= 0: pid=83624: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=220, BW=882KiB/s (903kB/s)(8844KiB/10031msec) 00:20:43.677 slat (usec): min=6, max=11069, avg=40.31, stdev=378.88 00:20:43.677 clat (msec): min=25, max=151, avg=72.35, stdev=22.17 00:20:43.677 lat (msec): min=25, max=151, avg=72.39, stdev=22.18 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 53], 00:20:43.677 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:20:43.677 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 118], 00:20:43.677 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 153], 00:20:43.677 | 99.99th=[ 153] 00:20:43.677 bw ( KiB/s): min= 561, max= 1365, per=4.16%, avg=880.20, stdev=169.41, samples=20 00:20:43.677 iops : min= 140, max= 341, avg=219.95, stdev=42.35, samples=20 00:20:43.677 lat (msec) : 50=18.23%, 100=70.38%, 250=11.40% 00:20:43.677 cpu : usr=40.53%, sys=1.96%, ctx=1462, majf=0, minf=9 00:20:43.677 IO depths : 1=0.2%, 2=0.5%, 4=1.2%, 8=82.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename1: (groupid=0, jobs=1): err= 0: pid=83625: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=216, BW=866KiB/s (886kB/s)(8672KiB/10018msec) 00:20:43.677 slat (usec): 
min=4, max=8060, avg=29.68, stdev=298.11 00:20:43.677 clat (msec): min=20, max=139, avg=73.79, stdev=20.61 00:20:43.677 lat (msec): min=20, max=139, avg=73.82, stdev=20.62 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:20:43.677 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:20:43.677 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 115], 00:20:43.677 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 140], 00:20:43.677 | 99.99th=[ 140] 00:20:43.677 bw ( KiB/s): min= 616, max= 976, per=4.08%, avg=863.20, stdev=109.72, samples=20 00:20:43.677 iops : min= 154, max= 244, avg=215.80, stdev=27.43, samples=20 00:20:43.677 lat (msec) : 50=15.27%, 100=74.58%, 250=10.15% 00:20:43.677 cpu : usr=31.44%, sys=1.38%, ctx=935, majf=0, minf=9 00:20:43.677 IO depths : 1=0.1%, 2=1.0%, 4=3.6%, 8=79.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename1: (groupid=0, jobs=1): err= 0: pid=83626: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=215, BW=862KiB/s (882kB/s)(8644KiB/10030msec) 00:20:43.677 slat (usec): min=4, max=9029, avg=37.28, stdev=355.58 00:20:43.677 clat (msec): min=18, max=136, avg=74.08, stdev=21.33 00:20:43.677 lat (msec): min=18, max=136, avg=74.11, stdev=21.34 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:20:43.677 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:20:43.677 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 117], 00:20:43.677 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:20:43.677 | 99.99th=[ 138] 00:20:43.677 bw ( KiB/s): min= 560, max= 1258, per=4.05%, avg=857.70, stdev=142.12, samples=20 00:20:43.677 iops : min= 140, max= 314, avg=214.40, stdev=35.46, samples=20 00:20:43.677 lat (msec) : 20=0.09%, 50=15.09%, 100=74.04%, 250=10.78% 00:20:43.677 cpu : usr=36.01%, sys=1.75%, ctx=979, majf=0, minf=9 00:20:43.677 IO depths : 1=0.1%, 2=1.1%, 4=4.0%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename2: (groupid=0, jobs=1): err= 0: pid=83627: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=219, BW=878KiB/s (899kB/s)(8808KiB/10035msec) 00:20:43.677 slat (usec): min=4, max=8018, avg=23.37, stdev=190.92 00:20:43.677 clat (msec): min=14, max=150, avg=72.76, stdev=22.03 00:20:43.677 lat (msec): min=14, max=150, avg=72.79, stdev=22.03 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 53], 00:20:43.677 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 79], 00:20:43.677 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 115], 00:20:43.677 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.677 | 99.99th=[ 150] 00:20:43.677 bw ( KiB/s): min= 568, max= 1520, per=4.13%, avg=874.40, stdev=189.96, samples=20 00:20:43.677 iops : min= 142, max= 380, 
avg=218.60, stdev=47.49, samples=20 00:20:43.677 lat (msec) : 20=0.09%, 50=16.67%, 100=72.39%, 250=10.85% 00:20:43.677 cpu : usr=40.87%, sys=2.02%, ctx=1235, majf=0, minf=9 00:20:43.677 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename2: (groupid=0, jobs=1): err= 0: pid=83628: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=222, BW=891KiB/s (912kB/s)(8944KiB/10037msec) 00:20:43.677 slat (usec): min=4, max=4918, avg=19.74, stdev=104.17 00:20:43.677 clat (msec): min=12, max=154, avg=71.66, stdev=23.67 00:20:43.677 lat (msec): min=12, max=154, avg=71.68, stdev=23.67 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 54], 00:20:43.677 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:43.677 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 115], 00:20:43.677 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:20:43.677 | 99.99th=[ 155] 00:20:43.677 bw ( KiB/s): min= 584, max= 1763, per=4.20%, avg=888.15, stdev=234.12, samples=20 00:20:43.677 iops : min= 146, max= 440, avg=222.00, stdev=58.38, samples=20 00:20:43.677 lat (msec) : 20=2.19%, 50=14.71%, 100=72.54%, 250=10.55% 00:20:43.677 cpu : usr=40.02%, sys=1.76%, ctx=1201, majf=0, minf=9 00:20:43.677 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:43.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.677 issued rwts: total=2236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.677 filename2: (groupid=0, jobs=1): err= 0: pid=83629: Tue Nov 26 20:41:41 2024 00:20:43.677 read: IOPS=223, BW=893KiB/s (915kB/s)(8960KiB/10031msec) 00:20:43.677 slat (usec): min=4, max=8029, avg=23.74, stdev=183.66 00:20:43.677 clat (msec): min=13, max=137, avg=71.47, stdev=23.06 00:20:43.677 lat (msec): min=13, max=137, avg=71.50, stdev=23.06 00:20:43.677 clat percentiles (msec): 00:20:43.677 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 00:20:43.677 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:43.677 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 115], 00:20:43.677 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 138], 00:20:43.677 | 99.99th=[ 138] 00:20:43.677 bw ( KiB/s): min= 569, max= 1614, per=4.21%, avg=891.85, stdev=214.70, samples=20 00:20:43.677 iops : min= 142, max= 403, avg=222.85, stdev=53.62, samples=20 00:20:43.678 lat (msec) : 20=0.80%, 50=18.88%, 100=70.22%, 250=10.09% 00:20:43.678 cpu : usr=33.01%, sys=1.68%, ctx=932, majf=0, minf=9 00:20:43.678 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:43.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.678 filename2: (groupid=0, jobs=1): err= 0: pid=83630: Tue Nov 26 20:41:41 2024 
00:20:43.678 read: IOPS=228, BW=914KiB/s (936kB/s)(9152KiB/10012msec) 00:20:43.678 slat (usec): min=4, max=8039, avg=47.03, stdev=473.19 00:20:43.678 clat (msec): min=12, max=140, avg=69.81, stdev=22.00 00:20:43.678 lat (msec): min=13, max=140, avg=69.86, stdev=22.00 00:20:43.678 clat percentiles (msec): 00:20:43.678 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 50], 00:20:43.678 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 73], 00:20:43.678 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 111], 00:20:43.678 | 99.00th=[ 130], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:20:43.678 | 99.99th=[ 140] 00:20:43.678 bw ( KiB/s): min= 616, max= 1264, per=4.27%, avg=903.68, stdev=146.19, samples=19 00:20:43.678 iops : min= 154, max= 316, avg=225.89, stdev=36.53, samples=19 00:20:43.678 lat (msec) : 20=0.48%, 50=22.20%, 100=67.70%, 250=9.62% 00:20:43.678 cpu : usr=33.37%, sys=1.60%, ctx=904, majf=0, minf=9 00:20:43.678 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:43.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 issued rwts: total=2288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.678 filename2: (groupid=0, jobs=1): err= 0: pid=83631: Tue Nov 26 20:41:41 2024 00:20:43.678 read: IOPS=222, BW=889KiB/s (910kB/s)(8900KiB/10016msec) 00:20:43.678 slat (usec): min=4, max=8060, avg=40.70, stdev=398.17 00:20:43.678 clat (msec): min=16, max=140, avg=71.87, stdev=21.19 00:20:43.678 lat (msec): min=16, max=140, avg=71.91, stdev=21.19 00:20:43.678 clat percentiles (msec): 00:20:43.678 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 52], 00:20:43.678 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:20:43.678 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 115], 00:20:43.678 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.678 | 99.99th=[ 140] 00:20:43.678 bw ( KiB/s): min= 608, max= 1026, per=4.15%, avg=878.42, stdev=118.03, samples=19 00:20:43.678 iops : min= 152, max= 256, avg=219.58, stdev=29.47, samples=19 00:20:43.678 lat (msec) : 20=0.31%, 50=17.62%, 100=72.27%, 250=9.80% 00:20:43.678 cpu : usr=38.43%, sys=1.76%, ctx=1051, majf=0, minf=9 00:20:43.678 IO depths : 1=0.1%, 2=0.7%, 4=2.3%, 8=81.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:43.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.678 filename2: (groupid=0, jobs=1): err= 0: pid=83632: Tue Nov 26 20:41:41 2024 00:20:43.678 read: IOPS=223, BW=896KiB/s (917kB/s)(8976KiB/10023msec) 00:20:43.678 slat (usec): min=5, max=7045, avg=28.56, stdev=224.60 00:20:43.678 clat (msec): min=24, max=137, avg=71.31, stdev=21.46 00:20:43.678 lat (msec): min=24, max=137, avg=71.34, stdev=21.46 00:20:43.678 clat percentiles (msec): 00:20:43.678 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 51], 00:20:43.678 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 74], 00:20:43.678 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 113], 00:20:43.678 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.678 | 99.99th=[ 138] 00:20:43.678 bw ( KiB/s): min= 640, 
max= 1152, per=4.22%, avg=892.40, stdev=133.32, samples=20 00:20:43.678 iops : min= 160, max= 288, avg=223.10, stdev=33.33, samples=20 00:20:43.678 lat (msec) : 50=19.65%, 100=70.99%, 250=9.36% 00:20:43.678 cpu : usr=38.67%, sys=1.60%, ctx=1059, majf=0, minf=9 00:20:43.678 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.678 filename2: (groupid=0, jobs=1): err= 0: pid=83633: Tue Nov 26 20:41:41 2024 00:20:43.678 read: IOPS=210, BW=843KiB/s (863kB/s)(8460KiB/10041msec) 00:20:43.678 slat (usec): min=4, max=8029, avg=30.96, stdev=321.97 00:20:43.678 clat (msec): min=13, max=156, avg=75.72, stdev=23.02 00:20:43.678 lat (msec): min=13, max=156, avg=75.76, stdev=23.03 00:20:43.678 clat percentiles (msec): 00:20:43.678 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 59], 00:20:43.678 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 82], 00:20:43.678 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 121], 00:20:43.678 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 146], 00:20:43.678 | 99.99th=[ 157] 00:20:43.678 bw ( KiB/s): min= 568, max= 1424, per=3.96%, avg=839.60, stdev=174.95, samples=20 00:20:43.678 iops : min= 142, max= 356, avg=209.90, stdev=43.74, samples=20 00:20:43.678 lat (msec) : 20=0.85%, 50=13.71%, 100=72.53%, 250=12.91% 00:20:43.678 cpu : usr=31.41%, sys=1.42%, ctx=935, majf=0, minf=9 00:20:43.678 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=77.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:43.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 issued rwts: total=2115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:43.678 filename2: (groupid=0, jobs=1): err= 0: pid=83634: Tue Nov 26 20:41:41 2024 00:20:43.678 read: IOPS=223, BW=893KiB/s (915kB/s)(8948KiB/10019msec) 00:20:43.678 slat (usec): min=4, max=8039, avg=36.52, stdev=335.51 00:20:43.678 clat (msec): min=17, max=135, avg=71.50, stdev=21.70 00:20:43.678 lat (msec): min=17, max=135, avg=71.54, stdev=21.70 00:20:43.678 clat percentiles (msec): 00:20:43.678 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 51], 00:20:43.678 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:20:43.678 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 113], 00:20:43.678 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:20:43.678 | 99.99th=[ 136] 00:20:43.678 bw ( KiB/s): min= 648, max= 1136, per=4.20%, avg=889.60, stdev=127.07, samples=20 00:20:43.678 iops : min= 162, max= 284, avg=222.40, stdev=31.77, samples=20 00:20:43.678 lat (msec) : 20=0.18%, 50=19.67%, 100=70.94%, 250=9.21% 00:20:43.678 cpu : usr=35.80%, sys=1.26%, ctx=1052, majf=0, minf=9 00:20:43.678 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:43.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.678 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.678 latency : target=0, window=0, percentile=100.00%, depth=16 
00:20:43.678 00:20:43.678 Run status group 0 (all jobs): 00:20:43.678 READ: bw=20.7MiB/s (21.7MB/s), 829KiB/s-997KiB/s (848kB/s-1021kB/s), io=208MiB (218MB), run=10002-10063msec 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:43.678 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 bdev_null0 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 [2024-11-26 20:41:42.284114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 bdev_null1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.679 { 00:20:43.679 "params": { 00:20:43.679 "name": "Nvme$subsystem", 00:20:43.679 "trtype": "$TEST_TRANSPORT", 00:20:43.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.679 "adrfam": "ipv4", 00:20:43.679 "trsvcid": "$NVMF_PORT", 00:20:43.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.679 "hdgst": ${hdgst:-false}, 00:20:43.679 "ddgst": ${ddgst:-false} 00:20:43.679 }, 00:20:43.679 "method": "bdev_nvme_attach_controller" 00:20:43.679 } 00:20:43.679 EOF 00:20:43.679 )") 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.679 { 00:20:43.679 "params": { 00:20:43.679 "name": "Nvme$subsystem", 00:20:43.679 "trtype": "$TEST_TRANSPORT", 00:20:43.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.679 "adrfam": "ipv4", 00:20:43.679 "trsvcid": "$NVMF_PORT", 00:20:43.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.679 "hdgst": ${hdgst:-false}, 00:20:43.679 "ddgst": ${ddgst:-false} 00:20:43.679 }, 00:20:43.679 "method": "bdev_nvme_attach_controller" 00:20:43.679 } 00:20:43.679 EOF 00:20:43.679 )") 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:43.679 20:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:43.679 "params": { 00:20:43.679 "name": "Nvme0", 00:20:43.679 "trtype": "tcp", 00:20:43.679 "traddr": "10.0.0.3", 00:20:43.679 "adrfam": "ipv4", 00:20:43.679 "trsvcid": "4420", 00:20:43.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:43.679 "hdgst": false, 00:20:43.679 "ddgst": false 00:20:43.679 }, 00:20:43.679 "method": "bdev_nvme_attach_controller" 00:20:43.679 },{ 00:20:43.679 "params": { 00:20:43.679 "name": "Nvme1", 00:20:43.679 "trtype": "tcp", 00:20:43.679 "traddr": "10.0.0.3", 00:20:43.679 "adrfam": "ipv4", 00:20:43.679 "trsvcid": "4420", 00:20:43.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.680 "hdgst": false, 00:20:43.680 "ddgst": false 00:20:43.680 }, 00:20:43.680 "method": "bdev_nvme_attach_controller" 00:20:43.680 }' 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:43.680 20:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:43.680 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:43.680 ... 00:20:43.680 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:43.680 ... 
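The long command line above boils down to: fio is started with the SPDK bdev fio plugin preloaded, a JSON config on one file descriptor that attaches the two NVMe-oF controllers, and the generated job file on the other. A rough standalone equivalent, assuming the plugin at build/fio/spdk_bdev from this run, the usual SPDK JSON config layout (a "subsystems" -> "bdev" -> "config" array wrapping the bdev_nvme_attach_controller entries printed above), and files on disk instead of /dev/fd (bdev.json and randread.fio are illustrative names):

# bdev.json -- one entry per controller; the second (Nvme1/cnode1) follows the same pattern
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Run fio through the bdev engine against the attached namespaces
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json randread.fio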
00:20:43.680 fio-3.35 00:20:43.680 Starting 4 threads 00:20:47.865 00:20:47.865 filename0: (groupid=0, jobs=1): err= 0: pid=83774: Tue Nov 26 20:41:48 2024 00:20:47.865 read: IOPS=1876, BW=14.7MiB/s (15.4MB/s)(73.3MiB/5001msec) 00:20:47.865 slat (nsec): min=5457, max=92714, avg=18389.58, stdev=10699.76 00:20:47.865 clat (usec): min=953, max=7197, avg=4210.67, stdev=1179.23 00:20:47.865 lat (usec): min=980, max=7221, avg=4229.06, stdev=1177.19 00:20:47.865 clat percentiles (usec): 00:20:47.865 | 1.00th=[ 1942], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2835], 00:20:47.865 | 30.00th=[ 3163], 40.00th=[ 4015], 50.00th=[ 4686], 60.00th=[ 4883], 00:20:47.865 | 70.00th=[ 5080], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5735], 00:20:47.865 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[ 6849], 00:20:47.865 | 99.99th=[ 7177] 00:20:47.865 bw ( KiB/s): min=11392, max=17795, per=23.58%, avg=14971.00, stdev=2153.15, samples=9 00:20:47.865 iops : min= 1424, max= 2224, avg=1871.33, stdev=269.08, samples=9 00:20:47.865 lat (usec) : 1000=0.02% 00:20:47.865 lat (msec) : 2=1.14%, 4=38.72%, 10=60.12% 00:20:47.865 cpu : usr=94.40%, sys=4.72%, ctx=7, majf=0, minf=0 00:20:47.865 IO depths : 1=0.1%, 2=6.8%, 4=59.9%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.865 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.865 issued rwts: total=9386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.865 filename0: (groupid=0, jobs=1): err= 0: pid=83775: Tue Nov 26 20:41:48 2024 00:20:47.865 read: IOPS=2003, BW=15.7MiB/s (16.4MB/s)(78.3MiB/5001msec) 00:20:47.865 slat (usec): min=4, max=103, avg=21.56, stdev=10.95 00:20:47.865 clat (usec): min=1419, max=7275, avg=3940.67, stdev=1178.54 00:20:47.865 lat (usec): min=1445, max=7288, avg=3962.23, stdev=1177.21 00:20:47.865 clat percentiles (usec): 00:20:47.865 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2671], 00:20:47.865 | 30.00th=[ 2933], 40.00th=[ 3228], 50.00th=[ 4047], 60.00th=[ 4621], 00:20:47.865 | 70.00th=[ 4883], 80.00th=[ 5145], 90.00th=[ 5407], 95.00th=[ 5604], 00:20:47.865 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 6718], 00:20:47.865 | 99.99th=[ 6980] 00:20:47.865 bw ( KiB/s): min=13792, max=17952, per=25.55%, avg=16220.44, stdev=1173.43, samples=9 00:20:47.865 iops : min= 1724, max= 2244, avg=2027.56, stdev=146.68, samples=9 00:20:47.865 lat (msec) : 2=0.44%, 4=49.21%, 10=50.35% 00:20:47.865 cpu : usr=94.46%, sys=4.60%, ctx=7, majf=0, minf=0 00:20:47.865 IO depths : 1=0.1%, 2=2.2%, 4=62.5%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.865 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.865 issued rwts: total=10021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.865 filename1: (groupid=0, jobs=1): err= 0: pid=83776: Tue Nov 26 20:41:48 2024 00:20:47.865 read: IOPS=2002, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5002msec) 00:20:47.865 slat (usec): min=5, max=101, avg=21.37, stdev=10.72 00:20:47.865 clat (usec): min=1442, max=6778, avg=3943.41, stdev=1179.22 00:20:47.865 lat (usec): min=1456, max=6801, avg=3964.78, stdev=1177.50 00:20:47.865 clat percentiles (usec): 00:20:47.865 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2671], 00:20:47.865 
| 30.00th=[ 2933], 40.00th=[ 3228], 50.00th=[ 4080], 60.00th=[ 4621], 00:20:47.865 | 70.00th=[ 4883], 80.00th=[ 5145], 90.00th=[ 5407], 95.00th=[ 5604], 00:20:47.865 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6456], 00:20:47.865 | 99.99th=[ 6718] 00:20:47.865 bw ( KiB/s): min=13696, max=17952, per=25.54%, avg=16218.67, stdev=1217.18, samples=9 00:20:47.866 iops : min= 1712, max= 2244, avg=2027.33, stdev=152.15, samples=9 00:20:47.866 lat (msec) : 2=0.41%, 4=49.20%, 10=50.39% 00:20:47.866 cpu : usr=94.20%, sys=4.88%, ctx=7, majf=0, minf=1 00:20:47.866 IO depths : 1=0.1%, 2=2.2%, 4=62.5%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.866 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.866 issued rwts: total=10016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.866 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.866 filename1: (groupid=0, jobs=1): err= 0: pid=83777: Tue Nov 26 20:41:48 2024 00:20:47.866 read: IOPS=2054, BW=16.1MiB/s (16.8MB/s)(80.3MiB/5001msec) 00:20:47.866 slat (usec): min=4, max=102, avg=20.38, stdev=10.89 00:20:47.866 clat (usec): min=1551, max=6780, avg=3845.54, stdev=1181.37 00:20:47.866 lat (usec): min=1559, max=6806, avg=3865.92, stdev=1180.49 00:20:47.866 clat percentiles (usec): 00:20:47.866 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2638], 00:20:47.866 | 30.00th=[ 2900], 40.00th=[ 3097], 50.00th=[ 3556], 60.00th=[ 4555], 00:20:47.866 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5407], 95.00th=[ 5604], 00:20:47.866 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6521], 00:20:47.866 | 99.99th=[ 6652] 00:20:47.866 bw ( KiB/s): min=15680, max=18064, per=26.27%, avg=16682.67, stdev=938.29, samples=9 00:20:47.866 iops : min= 1960, max= 2258, avg=2085.33, stdev=117.29, samples=9 00:20:47.866 lat (msec) : 2=0.62%, 4=53.14%, 10=46.24% 00:20:47.866 cpu : usr=93.04%, sys=6.06%, ctx=9, majf=0, minf=0 00:20:47.866 IO depths : 1=0.1%, 2=0.4%, 4=63.5%, 8=36.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.866 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.866 issued rwts: total=10277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.866 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:47.866 00:20:47.866 Run status group 0 (all jobs): 00:20:47.866 READ: bw=62.0MiB/s (65.0MB/s), 14.7MiB/s-16.1MiB/s (15.4MB/s-16.8MB/s), io=310MiB (325MB), run=5001-5002msec 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.125 20:41:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.125 00:20:48.125 real 0m23.766s 00:20:48.125 user 2m4.949s 00:20:48.125 sys 0m7.300s 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.125 20:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.125 ************************************ 00:20:48.125 END TEST fio_dif_rand_params 00:20:48.125 ************************************ 00:20:48.385 20:41:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:48.385 20:41:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:48.385 20:41:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.385 20:41:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:48.385 ************************************ 00:20:48.385 START TEST fio_dif_digest 00:20:48.385 ************************************ 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:48.385 bdev_null0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:48.385 [2024-11-26 20:41:48.555431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.385 { 00:20:48.385 "params": { 00:20:48.385 "name": "Nvme$subsystem", 00:20:48.385 "trtype": "$TEST_TRANSPORT", 00:20:48.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.385 "adrfam": "ipv4", 00:20:48.385 "trsvcid": "$NVMF_PORT", 00:20:48.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.385 "hdgst": ${hdgst:-false}, 00:20:48.385 "ddgst": ${ddgst:-false} 00:20:48.385 }, 00:20:48.385 "method": "bdev_nvme_attach_controller" 00:20:48.385 } 00:20:48.385 EOF 00:20:48.385 )") 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:48.385 "params": { 00:20:48.385 "name": "Nvme0", 00:20:48.385 "trtype": "tcp", 00:20:48.385 "traddr": "10.0.0.3", 00:20:48.385 "adrfam": "ipv4", 00:20:48.385 "trsvcid": "4420", 00:20:48.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:48.385 "hdgst": true, 00:20:48.385 "ddgst": true 00:20:48.385 }, 00:20:48.385 "method": "bdev_nvme_attach_controller" 00:20:48.385 }' 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:48.385 20:41:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.643 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:48.643 ... 
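The only functional difference from the earlier rand_params run is in the attach parameters printed above: hdgst and ddgst are true, so the initiator-side bdev_nvme controller negotiates NVMe/TCP header and data digests. The job fio runs here uses 128 KiB random reads, iodepth 3, 3 jobs, for about 10 seconds. The generated job file is not echoed in the log; a sketch of what it plausibly amounts to, assuming the attached namespace is exposed as bdev Nvme0n1:

# digest.fio -- reconstruction of the generated job, not the literal file from this run
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF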
00:20:48.643 fio-3.35 00:20:48.643 Starting 3 threads 00:21:00.841 00:21:00.841 filename0: (groupid=0, jobs=1): err= 0: pid=83883: Tue Nov 26 20:41:59 2024 00:21:00.841 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10005msec) 00:21:00.841 slat (nsec): min=6644, max=56634, avg=14408.69, stdev=4302.10 00:21:00.841 clat (usec): min=10071, max=20238, avg=13272.04, stdev=1024.67 00:21:00.841 lat (usec): min=10084, max=20252, avg=13286.45, stdev=1025.03 00:21:00.841 clat percentiles (usec): 00:21:00.841 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:21:00.841 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13435], 00:21:00.841 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:21:00.841 | 99.00th=[16712], 99.50th=[17171], 99.90th=[20317], 99.95th=[20317], 00:21:00.841 | 99.99th=[20317] 00:21:00.841 bw ( KiB/s): min=26112, max=30720, per=33.32%, avg=28838.40, stdev=1585.23, samples=20 00:21:00.841 iops : min= 204, max= 240, avg=225.30, stdev=12.38, samples=20 00:21:00.841 lat (msec) : 20=99.87%, 50=0.13% 00:21:00.841 cpu : usr=91.57%, sys=7.92%, ctx=10, majf=0, minf=0 00:21:00.841 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:00.841 filename0: (groupid=0, jobs=1): err= 0: pid=83884: Tue Nov 26 20:41:59 2024 00:21:00.841 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10009msec) 00:21:00.841 slat (nsec): min=6844, max=51466, avg=11101.90, stdev=5235.40 00:21:00.841 clat (usec): min=11867, max=19898, avg=13280.55, stdev=1027.52 00:21:00.841 lat (usec): min=11874, max=19913, avg=13291.66, stdev=1028.28 00:21:00.841 clat percentiles (usec): 00:21:00.841 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:21:00.841 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:21:00.841 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:21:00.841 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19792], 99.95th=[19792], 00:21:00.841 | 99.99th=[19792] 00:21:00.841 bw ( KiB/s): min=26112, max=31488, per=33.32%, avg=28838.40, stdev=1623.93, samples=20 00:21:00.841 iops : min= 204, max= 246, avg=225.30, stdev=12.69, samples=20 00:21:00.841 lat (msec) : 20=100.00% 00:21:00.841 cpu : usr=90.80%, sys=8.62%, ctx=18, majf=0, minf=0 00:21:00.841 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:00.841 filename0: (groupid=0, jobs=1): err= 0: pid=83885: Tue Nov 26 20:41:59 2024 00:21:00.841 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10005msec) 00:21:00.841 slat (nsec): min=5309, max=46116, avg=14857.08, stdev=4347.49 00:21:00.841 clat (usec): min=10078, max=20237, avg=13270.35, stdev=1023.79 00:21:00.841 lat (usec): min=10091, max=20252, avg=13285.21, stdev=1024.40 00:21:00.841 clat percentiles (usec): 00:21:00.841 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:21:00.841 | 30.00th=[12518], 
40.00th=[12911], 50.00th=[13304], 60.00th=[13435], 00:21:00.841 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:21:00.841 | 99.00th=[16712], 99.50th=[17171], 99.90th=[20317], 99.95th=[20317], 00:21:00.841 | 99.99th=[20317] 00:21:00.841 bw ( KiB/s): min=26112, max=30720, per=33.32%, avg=28838.40, stdev=1585.23, samples=20 00:21:00.841 iops : min= 204, max= 240, avg=225.30, stdev=12.38, samples=20 00:21:00.841 lat (msec) : 20=99.87%, 50=0.13% 00:21:00.841 cpu : usr=90.86%, sys=8.37%, ctx=53, majf=0, minf=0 00:21:00.841 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:00.841 00:21:00.841 Run status group 0 (all jobs): 00:21:00.841 READ: bw=84.5MiB/s (88.6MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.6MB/s), io=846MiB (887MB), run=10005-10009msec 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:00.841 ************************************ 00:21:00.841 END TEST fio_dif_digest 00:21:00.841 ************************************ 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.841 00:21:00.841 real 0m10.985s 00:21:00.841 user 0m27.957s 00:21:00.841 sys 0m2.751s 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.841 20:41:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:00.841 20:41:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:00.841 20:41:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:00.841 rmmod nvme_tcp 00:21:00.841 rmmod nvme_fabrics 00:21:00.841 rmmod nvme_keyring 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83136 ']' 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83136 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83136 ']' 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83136 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83136 00:21:00.841 killing process with pid 83136 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83136' 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83136 00:21:00.841 20:41:59 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83136 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:00.841 20:41:59 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:00.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:00.841 Waiting for block devices as requested 00:21:00.841 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.842 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
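nvmftestfini, traced above and below, unwinds everything the test set up: it kills the target, unloads the kernel NVMe/TCP initiator modules, strips the SPDK-tagged iptables rules, and deletes the veth/bridge topology and the target namespace. Condensed into plain commands (interface and namespace names as used by nvmf/common.sh; the final netns delete is an assumption about what _remove_spdk_ns does, since its body is not echoed):

kill "$nvmfpid" && wait "$nvmfpid"        # pid recorded when nvmf_tgt was started (83136 in this run)
modprobe -r nvme-tcp nvme-fabrics         # nvme_keyring is pulled out as a dependency
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link set nvmf_init_br nomaster; ip link set nvmf_init_br2 nomaster
ip link set nvmf_tgt_br nomaster;  ip link set nvmf_tgt_br2 nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk          # assumed final step of _remove_spdk_ns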
00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.842 20:42:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:00.842 20:42:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.842 20:42:00 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:00.842 00:21:00.842 real 0m59.789s 00:21:00.842 user 3m49.265s 00:21:00.842 sys 0m18.499s 00:21:00.842 20:42:00 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.842 20:42:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:00.842 ************************************ 00:21:00.842 END TEST nvmf_dif 00:21:00.842 ************************************ 00:21:00.842 20:42:00 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:00.842 20:42:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.842 20:42:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.842 20:42:00 -- common/autotest_common.sh@10 -- # set +x 00:21:00.842 ************************************ 00:21:00.842 START TEST nvmf_abort_qd_sizes 00:21:00.842 ************************************ 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:00.842 * Looking for test storage... 00:21:00.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.842 --rc genhtml_branch_coverage=1 00:21:00.842 --rc genhtml_function_coverage=1 00:21:00.842 --rc genhtml_legend=1 00:21:00.842 --rc geninfo_all_blocks=1 00:21:00.842 --rc geninfo_unexecuted_blocks=1 00:21:00.842 00:21:00.842 ' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.842 --rc genhtml_branch_coverage=1 00:21:00.842 --rc genhtml_function_coverage=1 00:21:00.842 --rc genhtml_legend=1 00:21:00.842 --rc geninfo_all_blocks=1 00:21:00.842 --rc geninfo_unexecuted_blocks=1 00:21:00.842 00:21:00.842 ' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.842 --rc genhtml_branch_coverage=1 00:21:00.842 --rc genhtml_function_coverage=1 00:21:00.842 --rc genhtml_legend=1 00:21:00.842 --rc geninfo_all_blocks=1 00:21:00.842 --rc geninfo_unexecuted_blocks=1 00:21:00.842 00:21:00.842 ' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.842 --rc genhtml_branch_coverage=1 00:21:00.842 --rc genhtml_function_coverage=1 00:21:00.842 --rc genhtml_legend=1 00:21:00.842 --rc geninfo_all_blocks=1 00:21:00.842 --rc geninfo_unexecuted_blocks=1 00:21:00.842 00:21:00.842 ' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:00.842 Cannot find device "nvmf_init_br" 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:00.842 Cannot find device "nvmf_init_br2" 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:00.842 Cannot find device "nvmf_tgt_br" 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:00.842 20:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.842 Cannot find device "nvmf_tgt_br2" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:00.842 Cannot find device "nvmf_init_br" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:00.842 Cannot find device "nvmf_init_br2" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:00.842 Cannot find device "nvmf_tgt_br" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:00.842 Cannot find device "nvmf_tgt_br2" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:00.842 Cannot find device "nvmf_br" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:00.842 Cannot find device "nvmf_init_if" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:00.842 Cannot find device "nvmf_init_if2" 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
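After the remaining stale-interface checks, nvmf_veth_init builds the test topology from scratch, as the trace that follows shows: a network namespace for the target, two veth pairs whose target ends move into that namespace, two more pairs for the initiator side, all peer ends enslaved to one bridge, with 10.0.0.1-4/24 spread across the endpoints. Condensed into the equivalent commands (names and addresses as used by nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk
# Two veth pairs for the initiator side, two for the target side
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target ends move into the namespace; everything else stays in the root namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring all ends up (lo as well inside the namespace)
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the peer ends together and allow NVMe/TCP (port 4420) plus bridge forwarding
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT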
00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:00.842 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:01.100 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:01.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:01.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:21:01.101 00:21:01.101 --- 10.0.0.3 ping statistics --- 00:21:01.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.101 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:01.101 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:01.101 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:01.101 00:21:01.101 --- 10.0.0.4 ping statistics --- 00:21:01.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.101 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:01.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:01.101 00:21:01.101 --- 10.0.0.1 ping statistics --- 00:21:01.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.101 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:01.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:01.101 00:21:01.101 --- 10.0.0.2 ping statistics --- 00:21:01.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.101 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:01.101 20:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:01.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:01.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:01.927 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84533 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84533 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84533 ']' 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.927 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:01.927 [2024-11-26 20:42:02.277887] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
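With the topology reachable (the four pings above) and nvme-tcp loaded on the host side, nvmfappstart launches the SPDK target inside the namespace with core mask 0xf and waits for its RPC socket before the test proceeds. A stripped-down equivalent of that launch, with the socket poll standing in for the real waitforlisten helper (which does more than just check for the file):

ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Wait until the target's UNIX-domain RPC socket exists, then confirm it answers RPCs
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version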
00:21:01.927 [2024-11-26 20:42:02.278014] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.186 [2024-11-26 20:42:02.433691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.186 [2024-11-26 20:42:02.506945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.186 [2024-11-26 20:42:02.507017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.186 [2024-11-26 20:42:02.507041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.186 [2024-11-26 20:42:02.507051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.186 [2024-11-26 20:42:02.507060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.186 [2024-11-26 20:42:02.508467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.186 [2024-11-26 20:42:02.508697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.186 [2024-11-26 20:42:02.508540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.186 [2024-11-26 20:42:02.508690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.446 [2024-11-26 20:42:02.569747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:02.446 20:42:02 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
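The nvme_in_userspace helper traced here locates NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), i.e. the string 0108 matched against lspci -mm -n -D output, with each candidate BDF then kept only if it is still bound to the kernel nvme driver in sysfs. A rough standalone equivalent of that enumeration, built from the same commands shown in the trace:

  # list PCI functions whose class/subclass is 0108 and whose prog-if is 02 (NVMe)
  lspci -mm -n -D | grep -i -- -p02 | \
      awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' | \
  while read -r bdf; do
      # keep only controllers still attached to the kernel nvme driver
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
  done

In this run the loop yields 0000:00:10.0 and 0000:00:11.0, and the test takes the first one as the device to hand to the SPDK target.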
00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.446 20:42:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:02.446 ************************************ 00:21:02.446 START TEST spdk_target_abort 00:21:02.446 ************************************ 00:21:02.446 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:02.446 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:02.446 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:02.446 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.446 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:02.706 spdk_targetn1 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:02.706 [2024-11-26 20:42:02.809804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:02.706 [2024-11-26 20:42:02.848706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:02.706 20:42:02 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:02.706 20:42:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:05.998 Initializing NVMe Controllers 00:21:05.998 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:05.998 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:05.998 Initialization complete. Launching workers. 
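The setup that spdk_target_abort performed just before these workers launched is: attach the local NVMe controller over PCIe as a bdev, create a TCP transport and a test subsystem, expose the resulting bdev (spdk_targetn1) as namespace 1, and open a listener on 10.0.0.3:4420. Outside the test harness the same configuration could be driven with scripts/rpc.py; this is a sketch, while the traced run issues the identical RPCs through rpc_cmd:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420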
00:21:05.998 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9859, failed: 0 00:21:05.998 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1054, failed to submit 8805 00:21:05.998 success 757, unsuccessful 297, failed 0 00:21:05.998 20:42:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:05.998 20:42:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:09.300 Initializing NVMe Controllers 00:21:09.300 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:09.300 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:09.300 Initialization complete. Launching workers. 00:21:09.300 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8966, failed: 0 00:21:09.300 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1159, failed to submit 7807 00:21:09.300 success 392, unsuccessful 767, failed 0 00:21:09.300 20:42:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:09.300 20:42:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:12.583 Initializing NVMe Controllers 00:21:12.583 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:12.583 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:12.583 Initialization complete. Launching workers. 
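Each "Initializing NVMe Controllers ... success/unsuccessful/failed" block above is one pass of the rabort helper, which loops over the queue depths qds=(4 24 64) and points build/examples/abort at the subsystem; the NS line counts completed I/Os and the CTRLR line counts abort commands submitted and how many of those aborts succeeded. The loop reduces to roughly:

  TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  for qd in 4 24 64; do
      # 50% read / 50% write, 4 KiB I/O, aborting outstanding commands as they are issued
      /home/vagrant/spdk_repo/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
  done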
00:21:12.583 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31621, failed: 0 00:21:12.583 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2352, failed to submit 29269 00:21:12.583 success 463, unsuccessful 1889, failed 0 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.583 20:42:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84533 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84533 ']' 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84533 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84533 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.150 killing process with pid 84533 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84533' 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84533 00:21:13.150 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84533 00:21:13.408 00:21:13.408 real 0m10.815s 00:21:13.408 user 0m41.659s 00:21:13.408 sys 0m2.139s 00:21:13.408 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.408 20:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:13.408 ************************************ 00:21:13.408 END TEST spdk_target_abort 00:21:13.408 ************************************ 00:21:13.408 20:42:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:13.409 20:42:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:13.409 20:42:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.409 20:42:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:13.409 ************************************ 00:21:13.409 START TEST kernel_target_abort 00:21:13.409 
************************************ 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:13.409 20:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:13.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.667 Waiting for block devices as requested 00:21:13.927 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.927 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:13.927 No valid GPT data, bailing 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:13.927 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:14.186 No valid GPT data, bailing 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
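Before building the kernel target, nvmf/common.sh walks /sys/block/nvme* looking for a namespace it can claim: zoned devices are skipped, and a device only qualifies when spdk-gpt.py and blkid find no partition table on it (the "No valid GPT data, bailing" lines are the expected result for an unused disk). A trimmed-down version of that scan, keeping the last free device as the traced code does:

  nvme=""
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # skip zoned namespaces (queue/zoned reports "none" for regular devices)
      zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned != none ]] && continue
      # a device carrying any partition table is treated as in use and skipped
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
      nvme=/dev/$dev
  done
  echo "kernel target will use: $nvme"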
00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:14.186 No valid GPT data, bailing 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:14.186 No valid GPT data, bailing 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 --hostid=310b31eb-b117-4685-b95a-c58b48fd3835 -a 10.0.0.1 -t tcp -s 4420 00:21:14.186 00:21:14.186 Discovery Log Number of Records 2, Generation counter 2 00:21:14.186 =====Discovery Log Entry 0====== 00:21:14.186 trtype: tcp 00:21:14.186 adrfam: ipv4 00:21:14.186 subtype: current discovery subsystem 00:21:14.186 treq: not specified, sq flow control disable supported 00:21:14.186 portid: 1 00:21:14.186 trsvcid: 4420 00:21:14.186 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:14.186 traddr: 10.0.0.1 00:21:14.186 eflags: none 00:21:14.186 sectype: none 00:21:14.186 =====Discovery Log Entry 1====== 00:21:14.186 trtype: tcp 00:21:14.186 adrfam: ipv4 00:21:14.186 subtype: nvme subsystem 00:21:14.186 treq: not specified, sq flow control disable supported 00:21:14.186 portid: 1 00:21:14.186 trsvcid: 4420 00:21:14.186 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:14.186 traddr: 10.0.0.1 00:21:14.186 eflags: none 00:21:14.186 sectype: none 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:14.186 20:42:14 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.186 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:14.187 20:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:17.472 Initializing NVMe Controllers 00:21:17.472 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:17.472 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:17.473 Initialization complete. Launching workers. 00:21:17.473 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33674, failed: 0 00:21:17.473 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33674, failed to submit 0 00:21:17.473 success 0, unsuccessful 33674, failed 0 00:21:17.473 20:42:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:17.473 20:42:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:20.757 Initializing NVMe Controllers 00:21:20.757 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:20.757 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:20.757 Initialization complete. Launching workers. 
00:21:20.757 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69043, failed: 0 00:21:20.757 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30347, failed to submit 38696 00:21:20.757 success 0, unsuccessful 30347, failed 0 00:21:20.757 20:42:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:20.757 20:42:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:24.045 Initializing NVMe Controllers 00:21:24.045 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:24.045 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:24.045 Initialization complete. Launching workers. 00:21:24.045 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82877, failed: 0 00:21:24.045 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20690, failed to submit 62187 00:21:24.045 success 0, unsuccessful 20690, failed 0 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:24.045 20:42:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:24.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.144 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.144 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.144 00:21:27.144 real 0m13.460s 00:21:27.144 user 0m6.399s 00:21:27.144 sys 0m4.498s 00:21:27.144 ************************************ 00:21:27.144 END TEST kernel_target_abort 00:21:27.144 ************************************ 00:21:27.144 20:42:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.144 20:42:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:27.144 
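kernel_target_abort runs the same abort workload against the Linux nvmet target instead of SPDK: configure_kernel_target builds the target purely through configfs using the block device selected above (/dev/nvme1n1 in this run), and clean_kernel_target, traced just before the END banner, unwinds it again. The trace only shows the values being echoed, not the redirect targets; the sketch below maps them onto the standard nvmet configfs attribute files, which is an assumption about the exact file names:

  NQN=nqn.2016-06.io.spdk:testnqn
  SUB=/sys/kernel/config/nvmet/subsystems/$NQN
  PORT=/sys/kernel/config/nvmet/ports/1

  # create: subsystem, one namespace backed by the chosen disk, one TCP port on 10.0.0.1:4420
  mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
  # the traced run also writes "SPDK-$NQN" into a subsystem identity attribute;
  # the target file is not visible in the xtrace output, so it is omitted here
  echo 1            > "$SUB/attr_allow_any_host"
  echo /dev/nvme1n1 > "$SUB/namespaces/1/device_path"
  echo 1            > "$SUB/namespaces/1/enable"
  echo 10.0.0.1     > "$PORT/addr_traddr"
  echo tcp          > "$PORT/addr_trtype"
  echo 4420         > "$PORT/addr_trsvcid"
  echo ipv4         > "$PORT/addr_adrfam"
  ln -s "$SUB" "$PORT/subsystems/"

  # teardown, in the order the trace shows: disable, unlink, remove, unload
  echo 0 > "$SUB/namespaces/1/enable"
  rm -f "$PORT/subsystems/$NQN"
  rmdir "$SUB/namespaces/1" "$PORT" "$SUB"
  modprobe -r nvmet_tcp nvmet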
20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.144 rmmod nvme_tcp 00:21:27.144 rmmod nvme_fabrics 00:21:27.144 rmmod nvme_keyring 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84533 ']' 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84533 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84533 ']' 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84533 00:21:27.144 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84533) - No such process 00:21:27.144 Process with pid 84533 is not found 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84533 is not found' 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:27.144 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:27.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.402 Waiting for block devices as requested 00:21:27.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:27.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:27.660 20:42:27 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:27.660 20:42:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.660 20:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:27.660 ************************************ 00:21:27.660 END TEST nvmf_abort_qd_sizes 00:21:27.660 ************************************ 00:21:27.660 00:21:27.660 real 0m27.280s 00:21:27.660 user 0m49.187s 00:21:27.660 sys 0m8.068s 00:21:27.660 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.660 20:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:27.919 20:42:28 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:27.919 20:42:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:27.919 20:42:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.919 20:42:28 -- common/autotest_common.sh@10 -- # set +x 00:21:27.919 ************************************ 00:21:27.919 START TEST keyring_file 00:21:27.919 ************************************ 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:27.919 * Looking for test storage... 
00:21:27.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.919 --rc genhtml_branch_coverage=1 00:21:27.919 --rc genhtml_function_coverage=1 00:21:27.919 --rc genhtml_legend=1 00:21:27.919 --rc geninfo_all_blocks=1 00:21:27.919 --rc geninfo_unexecuted_blocks=1 00:21:27.919 00:21:27.919 ' 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.919 --rc genhtml_branch_coverage=1 00:21:27.919 --rc genhtml_function_coverage=1 00:21:27.919 --rc genhtml_legend=1 00:21:27.919 --rc geninfo_all_blocks=1 00:21:27.919 --rc 
geninfo_unexecuted_blocks=1 00:21:27.919 00:21:27.919 ' 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.919 --rc genhtml_branch_coverage=1 00:21:27.919 --rc genhtml_function_coverage=1 00:21:27.919 --rc genhtml_legend=1 00:21:27.919 --rc geninfo_all_blocks=1 00:21:27.919 --rc geninfo_unexecuted_blocks=1 00:21:27.919 00:21:27.919 ' 00:21:27.919 20:42:28 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.919 --rc genhtml_branch_coverage=1 00:21:27.919 --rc genhtml_function_coverage=1 00:21:27.919 --rc genhtml_legend=1 00:21:27.919 --rc geninfo_all_blocks=1 00:21:27.919 --rc geninfo_unexecuted_blocks=1 00:21:27.919 00:21:27.919 ' 00:21:27.919 20:42:28 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:27.919 20:42:28 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.919 20:42:28 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.919 20:42:28 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.919 20:42:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.919 20:42:28 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.919 20:42:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:27.919 20:42:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.919 20:42:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:28.178 20:42:28 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PwLw5MYQHh 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PwLw5MYQHh 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PwLw5MYQHh 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.PwLw5MYQHh 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BsSArz36W6 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:28.178 20:42:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BsSArz36W6 00:21:28.178 20:42:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BsSArz36W6 00:21:28.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
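prep_key, traced above, turns a raw hex key into an on-disk TLS PSK file: it picks a temporary path with mktemp, converts the hex string into the NVMe TLS interchange form (the NVMeTLSkey-1 prefix seen in the trace) via a small python helper inside nvmf/common.sh, and locks the file down to mode 0600. A sketch of the flow, assuming nvmf/common.sh is sourced so format_interchange_psk is available; the actual interchange encoding is done by that helper and is not reproduced here:

  name=key0
  key=00112233445566778899aabbccddeeff
  digest=0

  path=$(mktemp)                           # e.g. /tmp/tmp.PwLw5MYQHh in this run
  # emit the NVMeTLSkey-1 interchange string for the hex key (python one-liner in nvmf/common.sh)
  format_interchange_psk "$key" "$digest" > "$path"
  chmod 0600 "$path"                       # restrict permissions before handing the path to the keyring
  echo "$name -> $path"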
00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BsSArz36W6 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=85441 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.178 20:42:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85441 00:21:28.178 20:42:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85441 ']' 00:21:28.178 20:42:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.178 20:42:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.178 20:42:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.178 20:42:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.178 20:42:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:28.178 [2024-11-26 20:42:28.467137] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:21:28.178 [2024-11-26 20:42:28.467466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85441 ] 00:21:28.437 [2024-11-26 20:42:28.619913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.437 [2024-11-26 20:42:28.677635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.437 [2024-11-26 20:42:28.753995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:28.697 20:42:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.697 20:42:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:28.697 20:42:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:28.697 20:42:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.697 20:42:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:28.697 [2024-11-26 20:42:28.961955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.697 null0 00:21:28.697 [2024-11-26 20:42:28.993937] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.697 [2024-11-26 20:42:28.994123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.697 20:42:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:28.697 [2024-11-26 20:42:29.021916] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:28.697 request: 00:21:28.697 { 00:21:28.697 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:28.697 "secure_channel": false, 00:21:28.697 "listen_address": { 00:21:28.697 "trtype": "tcp", 00:21:28.697 "traddr": "127.0.0.1", 00:21:28.697 "trsvcid": "4420" 00:21:28.697 }, 00:21:28.697 "method": "nvmf_subsystem_add_listener", 00:21:28.697 "req_id": 1 00:21:28.697 } 00:21:28.697 Got JSON-RPC error response 00:21:28.697 response: 00:21:28.697 { 00:21:28.697 "code": -32602, 00:21:28.697 "message": "Invalid parameters" 00:21:28.697 } 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.697 20:42:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=85452 00:21:28.697 20:42:29 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:28.697 20:42:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85452 /var/tmp/bperf.sock 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85452 ']' 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.697 20:42:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:28.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:28.698 20:42:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.698 20:42:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:28.957 [2024-11-26 20:42:29.090007] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
00:21:28.957 [2024-11-26 20:42:29.090349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85452 ] 00:21:28.957 [2024-11-26 20:42:29.244326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.957 [2024-11-26 20:42:29.306883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.215 [2024-11-26 20:42:29.367915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:29.843 20:42:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.843 20:42:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:29.843 20:42:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:29.843 20:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:30.102 20:42:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BsSArz36W6 00:21:30.102 20:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BsSArz36W6 00:21:30.360 20:42:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:30.360 20:42:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:30.360 20:42:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.360 20:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.360 20:42:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:30.619 20:42:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PwLw5MYQHh == \/\t\m\p\/\t\m\p\.\P\w\L\w\5\M\Y\Q\H\h ]] 00:21:30.619 20:42:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:30.619 20:42:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:30.619 20:42:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.619 20:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.619 20:42:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:30.877 20:42:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BsSArz36W6 == \/\t\m\p\/\t\m\p\.\B\s\S\A\r\z\3\6\W\6 ]] 00:21:30.877 20:42:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:30.877 20:42:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:30.877 20:42:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:30.877 20:42:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:30.877 20:42:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.877 20:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.445 20:42:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:31.445 20:42:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:31.445 20:42:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:31.445 20:42:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.445 20:42:31 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.445 20:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.445 20:42:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:31.445 20:42:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:31.445 20:42:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.445 20:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.704 [2024-11-26 20:42:32.018379] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.963 nvme0n1 00:21:31.963 20:42:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:31.963 20:42:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:31.963 20:42:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.963 20:42:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.963 20:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.963 20:42:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:32.222 20:42:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:32.222 20:42:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:32.222 20:42:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:32.222 20:42:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.222 20:42:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.222 20:42:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:32.222 20:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.480 20:42:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:32.480 20:42:32 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.480 Running I/O for 1 seconds... 
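Before this one-second I/O run starts, the trace above has registered both key files with the bdevperf application over its RPC socket and attached an NVMe-oF/TCP controller that authenticates with key0. A condensed view of that RPC sequence, using only calls that appear in the log (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh
  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BsSArz36W6
  rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The randrw workload results printed next come from bdevperf.py perform_tests issued against the same /var/tmp/bperf.sock socket.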
00:21:33.416 13439.00 IOPS, 52.50 MiB/s 00:21:33.416 Latency(us) 00:21:33.416 [2024-11-26T20:42:33.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.416 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:33.416 nvme0n1 : 1.01 13480.45 52.66 0.00 0.00 9469.14 4289.63 17158.52 00:21:33.416 [2024-11-26T20:42:33.771Z] =================================================================================================================== 00:21:33.416 [2024-11-26T20:42:33.771Z] Total : 13480.45 52.66 0.00 0.00 9469.14 4289.63 17158.52 00:21:33.416 { 00:21:33.416 "results": [ 00:21:33.416 { 00:21:33.416 "job": "nvme0n1", 00:21:33.416 "core_mask": "0x2", 00:21:33.416 "workload": "randrw", 00:21:33.416 "percentage": 50, 00:21:33.416 "status": "finished", 00:21:33.416 "queue_depth": 128, 00:21:33.416 "io_size": 4096, 00:21:33.416 "runtime": 1.006569, 00:21:33.416 "iops": 13480.446944024701, 00:21:33.416 "mibps": 52.65799587509649, 00:21:33.416 "io_failed": 0, 00:21:33.416 "io_timeout": 0, 00:21:33.416 "avg_latency_us": 9469.137295037486, 00:21:33.416 "min_latency_us": 4289.629090909091, 00:21:33.416 "max_latency_us": 17158.516363636365 00:21:33.416 } 00:21:33.416 ], 00:21:33.416 "core_count": 1 00:21:33.416 } 00:21:33.416 20:42:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:33.416 20:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:33.674 20:42:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:33.674 20:42:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:33.674 20:42:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.674 20:42:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.674 20:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.674 20:42:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:33.933 20:42:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:33.933 20:42:34 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:33.933 20:42:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.933 20:42:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:33.933 20:42:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.933 20:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.933 20:42:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:34.192 20:42:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:34.192 20:42:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:34.192 20:42:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:34.192 20:42:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:34.192 20:42:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:34.192 20:42:34 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.192 20:42:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:34.192 20:42:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.192 20:42:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:34.192 20:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:34.451 [2024-11-26 20:42:34.678413] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:34.451 [2024-11-26 20:42:34.679087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21755d0 (107): Transport endpoint is not connected 00:21:34.451 [2024-11-26 20:42:34.680088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21755d0 (9): Bad file descriptor 00:21:34.451 [2024-11-26 20:42:34.681085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:34.451 [2024-11-26 20:42:34.681107] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:34.451 [2024-11-26 20:42:34.681133] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:34.451 [2024-11-26 20:42:34.681144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:34.451 request: 00:21:34.451 { 00:21:34.451 "name": "nvme0", 00:21:34.451 "trtype": "tcp", 00:21:34.451 "traddr": "127.0.0.1", 00:21:34.451 "adrfam": "ipv4", 00:21:34.451 "trsvcid": "4420", 00:21:34.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:34.451 "prchk_reftag": false, 00:21:34.451 "prchk_guard": false, 00:21:34.451 "hdgst": false, 00:21:34.451 "ddgst": false, 00:21:34.451 "psk": "key1", 00:21:34.451 "allow_unrecognized_csi": false, 00:21:34.451 "method": "bdev_nvme_attach_controller", 00:21:34.451 "req_id": 1 00:21:34.451 } 00:21:34.451 Got JSON-RPC error response 00:21:34.451 response: 00:21:34.451 { 00:21:34.451 "code": -5, 00:21:34.451 "message": "Input/output error" 00:21:34.451 } 00:21:34.451 20:42:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:34.451 20:42:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.451 20:42:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.451 20:42:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.451 20:42:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:34.451 20:42:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.451 20:42:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:34.451 20:42:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:34.451 20:42:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.451 20:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.711 20:42:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:34.711 20:42:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:34.711 20:42:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.711 20:42:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:34.711 20:42:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.711 20:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.711 20:42:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:34.970 20:42:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:34.970 20:42:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:34.970 20:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:35.229 20:42:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:35.229 20:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:35.796 20:42:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:35.796 20:42:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:35.796 20:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.055 20:42:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:36.055 20:42:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.PwLw5MYQHh 00:21:36.055 20:42:36 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:36.055 20:42:36 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:36.055 20:42:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:36.055 20:42:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:36.055 20:42:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.055 20:42:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:36.055 20:42:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.055 20:42:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:36.055 20:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:36.313 [2024-11-26 20:42:36.459712] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PwLw5MYQHh': 0100660 00:21:36.313 [2024-11-26 20:42:36.459760] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:36.313 request: 00:21:36.313 { 00:21:36.313 "name": "key0", 00:21:36.313 "path": "/tmp/tmp.PwLw5MYQHh", 00:21:36.313 "method": "keyring_file_add_key", 00:21:36.313 "req_id": 1 00:21:36.313 } 00:21:36.313 Got JSON-RPC error response 00:21:36.313 response: 00:21:36.313 { 00:21:36.313 "code": -1, 00:21:36.313 "message": "Operation not permitted" 00:21:36.313 } 00:21:36.313 20:42:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:36.313 20:42:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.313 20:42:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.313 20:42:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.313 20:42:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.PwLw5MYQHh 00:21:36.313 20:42:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:36.313 20:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PwLw5MYQHh 00:21:36.573 20:42:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.PwLw5MYQHh 00:21:36.573 20:42:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:36.573 20:42:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:36.573 20:42:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.573 20:42:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.573 20:42:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:36.573 20:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.832 20:42:37 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:36.832 20:42:37 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.832 20:42:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:36.832 20:42:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.832 20:42:37 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:36.832 20:42:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.832 20:42:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:36.832 20:42:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.832 20:42:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:36.832 20:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:37.089 [2024-11-26 20:42:37.358504] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.PwLw5MYQHh': No such file or directory 00:21:37.089 [2024-11-26 20:42:37.358553] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:37.089 [2024-11-26 20:42:37.358592] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:37.089 [2024-11-26 20:42:37.358602] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:37.089 [2024-11-26 20:42:37.358629] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:37.089 [2024-11-26 20:42:37.358653] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:37.089 request: 00:21:37.089 { 00:21:37.089 "name": "nvme0", 00:21:37.089 "trtype": "tcp", 00:21:37.089 "traddr": "127.0.0.1", 00:21:37.089 "adrfam": "ipv4", 00:21:37.089 "trsvcid": "4420", 00:21:37.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:37.089 "prchk_reftag": false, 00:21:37.089 "prchk_guard": false, 00:21:37.089 "hdgst": false, 00:21:37.089 "ddgst": false, 00:21:37.089 "psk": "key0", 00:21:37.089 "allow_unrecognized_csi": false, 00:21:37.089 "method": "bdev_nvme_attach_controller", 00:21:37.089 "req_id": 1 00:21:37.089 } 00:21:37.089 Got JSON-RPC error response 00:21:37.089 response: 00:21:37.089 { 00:21:37.089 "code": -19, 00:21:37.089 "message": "No such device" 00:21:37.089 } 00:21:37.089 20:42:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:37.089 20:42:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.089 20:42:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.089 20:42:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.089 20:42:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:37.089 20:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:37.347 20:42:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:37.347 
20:42:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KnD0kK7IE8 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:37.347 20:42:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:37.347 20:42:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.347 20:42:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:37.347 20:42:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:37.347 20:42:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:37.347 20:42:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KnD0kK7IE8 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KnD0kK7IE8 00:21:37.347 20:42:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.KnD0kK7IE8 00:21:37.347 20:42:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KnD0kK7IE8 00:21:37.347 20:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KnD0kK7IE8 00:21:37.915 20:42:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:37.915 20:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:38.174 nvme0n1 00:21:38.174 20:42:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:38.174 20:42:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:38.174 20:42:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.174 20:42:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.174 20:42:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.174 20:42:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:38.433 20:42:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:38.433 20:42:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:38.433 20:42:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:38.693 20:42:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:38.693 20:42:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:38.693 20:42:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:38.693 20:42:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.693 20:42:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.956 20:42:39 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:38.956 20:42:39 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:38.956 20:42:39 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:38.956 20:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.956 20:42:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.956 20:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:38.956 20:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.218 20:42:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:39.218 20:42:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:39.218 20:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:39.477 20:42:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:39.477 20:42:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:39.477 20:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.735 20:42:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:39.735 20:42:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KnD0kK7IE8 00:21:39.735 20:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KnD0kK7IE8 00:21:39.993 20:42:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BsSArz36W6 00:21:39.993 20:42:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BsSArz36W6 00:21:40.252 20:42:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.252 20:42:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.511 nvme0n1 00:21:40.511 20:42:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:40.511 20:42:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:41.080 20:42:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:41.080 "subsystems": [ 00:21:41.080 { 00:21:41.080 "subsystem": "keyring", 00:21:41.080 "config": [ 00:21:41.080 { 00:21:41.080 "method": "keyring_file_add_key", 00:21:41.080 "params": { 00:21:41.080 "name": "key0", 00:21:41.080 "path": "/tmp/tmp.KnD0kK7IE8" 00:21:41.080 } 00:21:41.080 }, 00:21:41.080 { 00:21:41.080 "method": "keyring_file_add_key", 00:21:41.080 "params": { 00:21:41.080 "name": "key1", 00:21:41.080 "path": "/tmp/tmp.BsSArz36W6" 00:21:41.080 } 00:21:41.080 } 00:21:41.080 ] 00:21:41.080 }, 00:21:41.080 { 00:21:41.080 "subsystem": "iobuf", 00:21:41.080 "config": [ 00:21:41.080 { 00:21:41.080 "method": "iobuf_set_options", 00:21:41.080 "params": { 00:21:41.080 "small_pool_count": 8192, 00:21:41.080 "large_pool_count": 1024, 00:21:41.080 "small_bufsize": 8192, 00:21:41.080 "large_bufsize": 135168, 00:21:41.080 "enable_numa": false 00:21:41.080 } 00:21:41.080 } 00:21:41.080 ] 00:21:41.080 }, 00:21:41.080 { 00:21:41.080 "subsystem": 
"sock", 00:21:41.080 "config": [ 00:21:41.080 { 00:21:41.080 "method": "sock_set_default_impl", 00:21:41.080 "params": { 00:21:41.080 "impl_name": "uring" 00:21:41.080 } 00:21:41.080 }, 00:21:41.080 { 00:21:41.080 "method": "sock_impl_set_options", 00:21:41.080 "params": { 00:21:41.080 "impl_name": "ssl", 00:21:41.080 "recv_buf_size": 4096, 00:21:41.080 "send_buf_size": 4096, 00:21:41.080 "enable_recv_pipe": true, 00:21:41.080 "enable_quickack": false, 00:21:41.080 "enable_placement_id": 0, 00:21:41.080 "enable_zerocopy_send_server": true, 00:21:41.080 "enable_zerocopy_send_client": false, 00:21:41.080 "zerocopy_threshold": 0, 00:21:41.080 "tls_version": 0, 00:21:41.080 "enable_ktls": false 00:21:41.080 } 00:21:41.080 }, 00:21:41.080 { 00:21:41.080 "method": "sock_impl_set_options", 00:21:41.080 "params": { 00:21:41.080 "impl_name": "posix", 00:21:41.080 "recv_buf_size": 2097152, 00:21:41.080 "send_buf_size": 2097152, 00:21:41.080 "enable_recv_pipe": true, 00:21:41.080 "enable_quickack": false, 00:21:41.080 "enable_placement_id": 0, 00:21:41.080 "enable_zerocopy_send_server": true, 00:21:41.080 "enable_zerocopy_send_client": false, 00:21:41.080 "zerocopy_threshold": 0, 00:21:41.080 "tls_version": 0, 00:21:41.080 "enable_ktls": false 00:21:41.080 } 00:21:41.080 }, 00:21:41.080 { 00:21:41.080 "method": "sock_impl_set_options", 00:21:41.080 "params": { 00:21:41.080 "impl_name": "uring", 00:21:41.080 "recv_buf_size": 2097152, 00:21:41.080 "send_buf_size": 2097152, 00:21:41.080 "enable_recv_pipe": true, 00:21:41.080 "enable_quickack": false, 00:21:41.080 "enable_placement_id": 0, 00:21:41.081 "enable_zerocopy_send_server": false, 00:21:41.081 "enable_zerocopy_send_client": false, 00:21:41.081 "zerocopy_threshold": 0, 00:21:41.081 "tls_version": 0, 00:21:41.081 "enable_ktls": false 00:21:41.081 } 00:21:41.081 } 00:21:41.081 ] 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "subsystem": "vmd", 00:21:41.081 "config": [] 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "subsystem": "accel", 00:21:41.081 "config": [ 00:21:41.081 { 00:21:41.081 "method": "accel_set_options", 00:21:41.081 "params": { 00:21:41.081 "small_cache_size": 128, 00:21:41.081 "large_cache_size": 16, 00:21:41.081 "task_count": 2048, 00:21:41.081 "sequence_count": 2048, 00:21:41.081 "buf_count": 2048 00:21:41.081 } 00:21:41.081 } 00:21:41.081 ] 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "subsystem": "bdev", 00:21:41.081 "config": [ 00:21:41.081 { 00:21:41.081 "method": "bdev_set_options", 00:21:41.081 "params": { 00:21:41.081 "bdev_io_pool_size": 65535, 00:21:41.081 "bdev_io_cache_size": 256, 00:21:41.081 "bdev_auto_examine": true, 00:21:41.081 "iobuf_small_cache_size": 128, 00:21:41.081 "iobuf_large_cache_size": 16 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "bdev_raid_set_options", 00:21:41.081 "params": { 00:21:41.081 "process_window_size_kb": 1024, 00:21:41.081 "process_max_bandwidth_mb_sec": 0 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "bdev_iscsi_set_options", 00:21:41.081 "params": { 00:21:41.081 "timeout_sec": 30 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "bdev_nvme_set_options", 00:21:41.081 "params": { 00:21:41.081 "action_on_timeout": "none", 00:21:41.081 "timeout_us": 0, 00:21:41.081 "timeout_admin_us": 0, 00:21:41.081 "keep_alive_timeout_ms": 10000, 00:21:41.081 "arbitration_burst": 0, 00:21:41.081 "low_priority_weight": 0, 00:21:41.081 "medium_priority_weight": 0, 00:21:41.081 "high_priority_weight": 0, 00:21:41.081 "nvme_adminq_poll_period_us": 
10000, 00:21:41.081 "nvme_ioq_poll_period_us": 0, 00:21:41.081 "io_queue_requests": 512, 00:21:41.081 "delay_cmd_submit": true, 00:21:41.081 "transport_retry_count": 4, 00:21:41.081 "bdev_retry_count": 3, 00:21:41.081 "transport_ack_timeout": 0, 00:21:41.081 "ctrlr_loss_timeout_sec": 0, 00:21:41.081 "reconnect_delay_sec": 0, 00:21:41.081 "fast_io_fail_timeout_sec": 0, 00:21:41.081 "disable_auto_failback": false, 00:21:41.081 "generate_uuids": false, 00:21:41.081 "transport_tos": 0, 00:21:41.081 "nvme_error_stat": false, 00:21:41.081 "rdma_srq_size": 0, 00:21:41.081 "io_path_stat": false, 00:21:41.081 "allow_accel_sequence": false, 00:21:41.081 "rdma_max_cq_size": 0, 00:21:41.081 "rdma_cm_event_timeout_ms": 0, 00:21:41.081 "dhchap_digests": [ 00:21:41.081 "sha256", 00:21:41.081 "sha384", 00:21:41.081 "sha512" 00:21:41.081 ], 00:21:41.081 "dhchap_dhgroups": [ 00:21:41.081 "null", 00:21:41.081 "ffdhe2048", 00:21:41.081 "ffdhe3072", 00:21:41.081 "ffdhe4096", 00:21:41.081 "ffdhe6144", 00:21:41.081 "ffdhe8192" 00:21:41.081 ] 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "bdev_nvme_attach_controller", 00:21:41.081 "params": { 00:21:41.081 "name": "nvme0", 00:21:41.081 "trtype": "TCP", 00:21:41.081 "adrfam": "IPv4", 00:21:41.081 "traddr": "127.0.0.1", 00:21:41.081 "trsvcid": "4420", 00:21:41.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.081 "prchk_reftag": false, 00:21:41.081 "prchk_guard": false, 00:21:41.081 "ctrlr_loss_timeout_sec": 0, 00:21:41.081 "reconnect_delay_sec": 0, 00:21:41.081 "fast_io_fail_timeout_sec": 0, 00:21:41.081 "psk": "key0", 00:21:41.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:41.081 "hdgst": false, 00:21:41.081 "ddgst": false, 00:21:41.081 "multipath": "multipath" 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "bdev_nvme_set_hotplug", 00:21:41.081 "params": { 00:21:41.081 "period_us": 100000, 00:21:41.081 "enable": false 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "bdev_wait_for_examine" 00:21:41.081 } 00:21:41.081 ] 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "subsystem": "nbd", 00:21:41.081 "config": [] 00:21:41.081 } 00:21:41.081 ] 00:21:41.081 }' 00:21:41.081 20:42:41 keyring_file -- keyring/file.sh@115 -- # killprocess 85452 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85452 ']' 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85452 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85452 00:21:41.081 killing process with pid 85452 00:21:41.081 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.081 00:21:41.081 Latency(us) 00:21:41.081 [2024-11-26T20:42:41.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.081 [2024-11-26T20:42:41.436Z] =================================================================================================================== 00:21:41.081 [2024-11-26T20:42:41.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85452' 00:21:41.081 
20:42:41 keyring_file -- common/autotest_common.sh@973 -- # kill 85452 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@978 -- # wait 85452 00:21:41.081 20:42:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=85708 00:21:41.081 20:42:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85708 /var/tmp/bperf.sock 00:21:41.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:41.081 20:42:41 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85708 ']' 00:21:41.081 20:42:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:41.081 20:42:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:41.081 "subsystems": [ 00:21:41.081 { 00:21:41.081 "subsystem": "keyring", 00:21:41.081 "config": [ 00:21:41.081 { 00:21:41.081 "method": "keyring_file_add_key", 00:21:41.081 "params": { 00:21:41.081 "name": "key0", 00:21:41.081 "path": "/tmp/tmp.KnD0kK7IE8" 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "keyring_file_add_key", 00:21:41.081 "params": { 00:21:41.081 "name": "key1", 00:21:41.081 "path": "/tmp/tmp.BsSArz36W6" 00:21:41.081 } 00:21:41.081 } 00:21:41.081 ] 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "subsystem": "iobuf", 00:21:41.081 "config": [ 00:21:41.081 { 00:21:41.081 "method": "iobuf_set_options", 00:21:41.081 "params": { 00:21:41.081 "small_pool_count": 8192, 00:21:41.081 "large_pool_count": 1024, 00:21:41.081 "small_bufsize": 8192, 00:21:41.081 "large_bufsize": 135168, 00:21:41.081 "enable_numa": false 00:21:41.081 } 00:21:41.081 } 00:21:41.081 ] 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "subsystem": "sock", 00:21:41.081 "config": [ 00:21:41.081 { 00:21:41.081 "method": "sock_set_default_impl", 00:21:41.081 "params": { 00:21:41.081 "impl_name": "uring" 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "sock_impl_set_options", 00:21:41.081 "params": { 00:21:41.081 "impl_name": "ssl", 00:21:41.081 "recv_buf_size": 4096, 00:21:41.081 "send_buf_size": 4096, 00:21:41.081 "enable_recv_pipe": true, 00:21:41.081 "enable_quickack": false, 00:21:41.081 "enable_placement_id": 0, 00:21:41.081 "enable_zerocopy_send_server": true, 00:21:41.081 "enable_zerocopy_send_client": false, 00:21:41.081 "zerocopy_threshold": 0, 00:21:41.081 "tls_version": 0, 00:21:41.081 "enable_ktls": false 00:21:41.081 } 00:21:41.081 }, 00:21:41.081 { 00:21:41.081 "method": "sock_impl_set_options", 00:21:41.082 "params": { 00:21:41.082 "impl_name": "posix", 00:21:41.082 "recv_buf_size": 2097152, 00:21:41.082 "send_buf_size": 2097152, 00:21:41.082 "enable_recv_pipe": true, 00:21:41.082 "enable_quickack": false, 00:21:41.082 "enable_placement_id": 0, 00:21:41.082 "enable_zerocopy_send_server": true, 00:21:41.082 "enable_zerocopy_send_client": false, 00:21:41.082 "zerocopy_threshold": 0, 00:21:41.082 "tls_version": 0, 00:21:41.082 "enable_ktls": false 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "sock_impl_set_options", 00:21:41.082 "params": { 00:21:41.082 "impl_name": "uring", 00:21:41.082 "recv_buf_size": 2097152, 00:21:41.082 "send_buf_size": 2097152, 00:21:41.082 "enable_recv_pipe": true, 00:21:41.082 "enable_quickack": false, 00:21:41.082 "enable_placement_id": 0, 00:21:41.082 "enable_zerocopy_send_server": false, 00:21:41.082 
"enable_zerocopy_send_client": false, 00:21:41.082 "zerocopy_threshold": 0, 00:21:41.082 "tls_version": 0, 00:21:41.082 "enable_ktls": false 00:21:41.082 } 00:21:41.082 } 00:21:41.082 ] 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "subsystem": "vmd", 00:21:41.082 "config": [] 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "subsystem": "accel", 00:21:41.082 "config": [ 00:21:41.082 { 00:21:41.082 "method": "accel_set_options", 00:21:41.082 "params": { 00:21:41.082 "small_cache_size": 128, 00:21:41.082 "large_cache_size": 16, 00:21:41.082 "task_count": 2048, 00:21:41.082 "sequence_count": 2048, 00:21:41.082 "buf_count": 2048 00:21:41.082 } 00:21:41.082 } 00:21:41.082 ] 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "subsystem": "bdev", 00:21:41.082 "config": [ 00:21:41.082 { 00:21:41.082 "method": "bdev_set_options", 00:21:41.082 "params": { 00:21:41.082 "bdev_io_pool_size": 65535, 00:21:41.082 "bdev_io_cache_size": 256, 00:21:41.082 "bdev_auto_examine": true, 00:21:41.082 "iobuf_small_cache_size": 128, 00:21:41.082 "iobuf_large_cache_size": 16 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "bdev_raid_set_options", 00:21:41.082 "params": { 00:21:41.082 "process_window_size_kb": 1024, 00:21:41.082 "process_max_bandwidth_mb_sec": 0 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "bdev_iscsi_set_options", 00:21:41.082 "params": { 00:21:41.082 "timeout_sec": 30 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "bdev_nvme_set_options", 00:21:41.082 "params": { 00:21:41.082 "action_on_timeout": "none", 00:21:41.082 "timeout_us": 0, 00:21:41.082 "timeout_admin_us": 0, 00:21:41.082 "keep_alive_timeout_ms": 10000, 00:21:41.082 "arbitration_burst": 0, 00:21:41.082 "low_priority_weight": 0, 00:21:41.082 "medium_priority_weight": 0, 00:21:41.082 "high_priority_weight": 0, 00:21:41.082 "nvme_adminq_poll_period_us": 10000, 00:21:41.082 "nvme_ioq_poll_period_us": 0, 00:21:41.082 "io_queue_requests": 512, 00:21:41.082 "delay_cmd_submit": true, 00:21:41.082 "transport_retry_count": 4, 00:21:41.082 "bdev_retry_count": 3, 00:21:41.082 "transport_ack_timeout": 0, 00:21:41.082 "ctrlr_loss_timeout_sec": 0, 00:21:41.082 "reconnect_delay_sec": 0, 00:21:41.082 "fast_io_fail_timeout_sec": 0, 00:21:41.082 "disable_auto_failback": false, 00:21:41.082 "generate_uuids": false, 00:21:41.082 "transport_tos": 0, 00:21:41.082 "nvme_error_stat": false, 00:21:41.082 "rdma_srq_size": 0, 00:21:41.082 "io_path_stat": false, 00:21:41.082 "allow_accel_sequence": false, 00:21:41.082 "rdma_max_cq_size": 0, 00:21:41.082 "rdma_cm_event_timeout_ms": 0, 00:21:41.082 "dhchap_digests": [ 00:21:41.082 "sha256", 00:21:41.082 "sha384", 00:21:41.082 "sha512" 00:21:41.082 ], 00:21:41.082 "dhchap_dhgroups": [ 00:21:41.082 "null", 00:21:41.082 "ffdhe2048", 00:21:41.082 "ffdhe3072", 00:21:41.082 "ffdhe4096", 00:21:41.082 "ffdhe6144", 00:21:41.082 "ffdhe8192" 00:21:41.082 ] 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "bdev_nvme_attach_controller", 00:21:41.082 "params": { 00:21:41.082 "name": "nvme0", 00:21:41.082 "trtype": "TCP", 00:21:41.082 "adrfam": "IPv4", 00:21:41.082 "traddr": "127.0.0.1", 00:21:41.082 "trsvcid": "4420", 00:21:41.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.082 "prchk_reftag": false, 00:21:41.082 "prchk_guard": false, 00:21:41.082 "ctrlr_loss_timeout_sec": 0, 00:21:41.082 "reconnect_delay_sec": 0, 00:21:41.082 "fast_io_fail_timeout_sec": 0, 00:21:41.082 "psk": "key0", 00:21:41.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:41.082 
"hdgst": false, 00:21:41.082 "ddgst": false, 00:21:41.082 "multipath": "multipath" 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "bdev_nvme_set_hotplug", 00:21:41.082 "params": { 00:21:41.082 "period_us": 100000, 00:21:41.082 "enable": false 00:21:41.082 } 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "method": "bdev_wait_for_examine" 00:21:41.082 } 00:21:41.082 ] 00:21:41.082 }, 00:21:41.082 { 00:21:41.082 "subsystem": "nbd", 00:21:41.082 "config": [] 00:21:41.082 } 00:21:41.082 ] 00:21:41.082 }' 00:21:41.082 20:42:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.082 20:42:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:41.082 20:42:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.082 20:42:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:41.341 [2024-11-26 20:42:41.446258] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 00:21:41.341 [2024-11-26 20:42:41.446569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85708 ] 00:21:41.341 [2024-11-26 20:42:41.592898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.341 [2024-11-26 20:42:41.640839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.599 [2024-11-26 20:42:41.777516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:41.599 [2024-11-26 20:42:41.835193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.166 20:42:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.166 20:42:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:42.166 20:42:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:42.166 20:42:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.166 20:42:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:42.425 20:42:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:42.425 20:42:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:42.425 20:42:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:42.425 20:42:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:42.425 20:42:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.425 20:42:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.425 20:42:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:42.686 20:42:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:42.686 20:42:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:42.686 20:42:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:42.686 20:42:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:42.686 20:42:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.686 20:42:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.686 
20:42:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:42.948 20:42:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:42.948 20:42:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:42.948 20:42:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:42.948 20:42:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:43.207 20:42:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:43.207 20:42:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:43.207 20:42:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.KnD0kK7IE8 /tmp/tmp.BsSArz36W6 00:21:43.207 20:42:43 keyring_file -- keyring/file.sh@20 -- # killprocess 85708 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85708 ']' 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85708 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85708 00:21:43.207 killing process with pid 85708 00:21:43.207 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.207 00:21:43.207 Latency(us) 00:21:43.207 [2024-11-26T20:42:43.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.207 [2024-11-26T20:42:43.562Z] =================================================================================================================== 00:21:43.207 [2024-11-26T20:42:43.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85708' 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@973 -- # kill 85708 00:21:43.207 20:42:43 keyring_file -- common/autotest_common.sh@978 -- # wait 85708 00:21:43.465 20:42:43 keyring_file -- keyring/file.sh@21 -- # killprocess 85441 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85441 ']' 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85441 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85441 00:21:43.465 killing process with pid 85441 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85441' 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@973 -- # kill 85441 00:21:43.465 20:42:43 keyring_file -- common/autotest_common.sh@978 -- # wait 85441 00:21:44.034 ************************************ 00:21:44.034 END TEST keyring_file 00:21:44.034 ************************************ 00:21:44.034 00:21:44.034 real 0m16.029s 00:21:44.034 user 0m40.695s 
00:21:44.034 sys 0m3.022s 00:21:44.034 20:42:44 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.034 20:42:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.034 20:42:44 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:44.034 20:42:44 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:44.034 20:42:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.034 20:42:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.034 20:42:44 -- common/autotest_common.sh@10 -- # set +x 00:21:44.034 ************************************ 00:21:44.034 START TEST keyring_linux 00:21:44.034 ************************************ 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:44.034 Joined session keyring: 784167974 00:21:44.034 * Looking for test storage... 00:21:44.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.034 20:42:44 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.034 --rc genhtml_branch_coverage=1 00:21:44.034 --rc genhtml_function_coverage=1 00:21:44.034 --rc genhtml_legend=1 00:21:44.034 --rc geninfo_all_blocks=1 00:21:44.034 --rc geninfo_unexecuted_blocks=1 00:21:44.034 00:21:44.034 ' 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.034 --rc genhtml_branch_coverage=1 00:21:44.034 --rc genhtml_function_coverage=1 00:21:44.034 --rc genhtml_legend=1 00:21:44.034 --rc geninfo_all_blocks=1 00:21:44.034 --rc geninfo_unexecuted_blocks=1 00:21:44.034 00:21:44.034 ' 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.034 --rc genhtml_branch_coverage=1 00:21:44.034 --rc genhtml_function_coverage=1 00:21:44.034 --rc genhtml_legend=1 00:21:44.034 --rc geninfo_all_blocks=1 00:21:44.034 --rc geninfo_unexecuted_blocks=1 00:21:44.034 00:21:44.034 ' 00:21:44.034 20:42:44 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.034 --rc genhtml_branch_coverage=1 00:21:44.034 --rc genhtml_function_coverage=1 00:21:44.034 --rc genhtml_legend=1 00:21:44.035 --rc geninfo_all_blocks=1 00:21:44.035 --rc geninfo_unexecuted_blocks=1 00:21:44.035 00:21:44.035 ' 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.035 20:42:44 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:310b31eb-b117-4685-b95a-c58b48fd3835 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=310b31eb-b117-4685-b95a-c58b48fd3835 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.035 20:42:44 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.035 20:42:44 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.035 20:42:44 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.035 20:42:44 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.035 20:42:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.035 20:42:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.035 20:42:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.035 20:42:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:44.035 20:42:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:44.035 /tmp/:spdk-test:key0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:44.035 20:42:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:44.035 20:42:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:44.035 20:42:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:44.294 20:42:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:44.294 /tmp/:spdk-test:key1 00:21:44.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.294 20:42:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:44.294 20:42:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85834 00:21:44.294 20:42:44 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.294 20:42:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85834 00:21:44.294 20:42:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85834 ']' 00:21:44.294 20:42:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.294 20:42:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.294 20:42:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.294 20:42:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.294 20:42:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:44.294 [2024-11-26 20:42:44.493516] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
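Note: prep_key above converts each hex key into the NVMe TLS PSK interchange format before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal Python sketch of that transformation follows; it assumes the helper appends a little-endian CRC-32 of the key text and base64-encodes the result (the exact internals of format_interchange_psk are an assumption here, but the output shape matches the NVMeTLSkey-1:00:...: strings passed to keyctl further down in this log).

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int = 0) -> str:
        """Sketch of the interchange format: NVMeTLSkey-1:<hh>:<base64(key || crc32)>:"""
        data = key.encode("ascii")
        # Assumption: a CRC-32 of the key text, packed little-endian, is appended
        crc = zlib.crc32(data).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(data + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

    # Keys used by this test run (values taken from the log above)
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
    print(format_interchange_psk("112233445566778899aabbccddeeff00", 0))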
00:21:44.294 [2024-11-26 20:42:44.493992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85834 ] 00:21:44.294 [2024-11-26 20:42:44.636419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.553 [2024-11-26 20:42:44.685030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.554 [2024-11-26 20:42:44.749084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.121 20:42:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.121 20:42:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:45.121 20:42:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:45.121 20:42:45 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.121 20:42:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:45.121 [2024-11-26 20:42:45.436768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.121 null0 00:21:45.121 [2024-11-26 20:42:45.468752] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.121 [2024-11-26 20:42:45.468907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.380 20:42:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:45.380 480027783 00:21:45.380 20:42:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:45.380 921188212 00:21:45.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:45.380 20:42:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85852 00:21:45.380 20:42:45 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:45.380 20:42:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85852 /var/tmp/bperf.sock 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85852 ']' 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.380 20:42:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:45.380 [2024-11-26 20:42:45.551130] Starting SPDK v25.01-pre git sha1 5ca6db5da / DPDK 24.03.0 initialization... 
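Note: the keyctl add commands above load both formatted PSKs into the session keyring (@s) under the names :spdk-test:key0 and :spdk-test:key1, and keyctl prints the resulting serial numbers (480027783 and 921188212). Later steps re-resolve those names with keyctl search and drop them with keyctl unlink during cleanup. A small, purely illustrative Python wrapper around the same keyctl calls seen in this log:

    import subprocess

    def keyctl(*args: str) -> str:
        # Thin wrapper around the keyutils CLI used by the test
        out = subprocess.run(["keyctl", *args], check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    # Mirror the steps visible in the log (name and value copied from above)
    sn = keyctl("add", "user", ":spdk-test:key0",
                "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:",
                "@s")
    assert keyctl("search", "@s", "user", ":spdk-test:key0") == sn
    print(keyctl("print", sn))   # shows the stored PSK string
    keyctl("unlink", sn)         # drop the key again, as cleanup does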
00:21:45.380 [2024-11-26 20:42:45.551607] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85852 ] 00:21:45.380 [2024-11-26 20:42:45.696857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.639 [2024-11-26 20:42:45.743570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.639 20:42:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.639 20:42:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:45.639 20:42:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:45.639 20:42:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:45.897 20:42:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:45.897 20:42:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:46.157 [2024-11-26 20:42:46.330266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:46.157 20:42:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:46.157 20:42:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:46.415 [2024-11-26 20:42:46.647264] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.415 nvme0n1 00:21:46.415 20:42:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:46.415 20:42:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:46.415 20:42:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:46.415 20:42:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:46.415 20:42:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:46.415 20:42:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.674 20:42:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:46.674 20:42:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:46.674 20:42:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:46.674 20:42:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:46.674 20:42:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.674 20:42:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.674 20:42:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@25 -- # sn=480027783 00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 480027783 == \4\8\0\0\2\7\7\8\3 ]] 00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 480027783 00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:46.932 20:42:47 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:47.190 Running I/O for 1 seconds... 00:21:48.124 14934.00 IOPS, 58.34 MiB/s 00:21:48.124 Latency(us) 00:21:48.124 [2024-11-26T20:42:48.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.125 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:48.125 nvme0n1 : 1.01 14929.83 58.32 0.00 0.00 8532.23 2785.28 11439.01 00:21:48.125 [2024-11-26T20:42:48.480Z] =================================================================================================================== 00:21:48.125 [2024-11-26T20:42:48.480Z] Total : 14929.83 58.32 0.00 0.00 8532.23 2785.28 11439.01 00:21:48.125 { 00:21:48.125 "results": [ 00:21:48.125 { 00:21:48.125 "job": "nvme0n1", 00:21:48.125 "core_mask": "0x2", 00:21:48.125 "workload": "randread", 00:21:48.125 "status": "finished", 00:21:48.125 "queue_depth": 128, 00:21:48.125 "io_size": 4096, 00:21:48.125 "runtime": 1.00892, 00:21:48.125 "iops": 14929.825952503666, 00:21:48.125 "mibps": 58.31963262696745, 00:21:48.125 "io_failed": 0, 00:21:48.125 "io_timeout": 0, 00:21:48.125 "avg_latency_us": 8532.230061137163, 00:21:48.125 "min_latency_us": 2785.28, 00:21:48.125 "max_latency_us": 11439.01090909091 00:21:48.125 } 00:21:48.125 ], 00:21:48.125 "core_count": 1 00:21:48.125 } 00:21:48.125 20:42:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:48.125 20:42:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:48.383 20:42:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:48.383 20:42:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:48.383 20:42:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:48.383 20:42:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:48.383 20:42:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:48.383 20:42:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.641 20:42:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:48.641 20:42:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:48.641 20:42:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:48.641 20:42:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:48.641 20:42:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:48.641 20:42:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:48.641 20:42:48 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:48.641 20:42:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.641 20:42:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:48.641 20:42:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.641 20:42:48 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:48.641 20:42:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:48.900 [2024-11-26 20:42:49.252554] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:48.900 [2024-11-26 20:42:49.253136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66c5d0 (107): Transport endpoint is not connected 00:21:49.158 [2024-11-26 20:42:49.254142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66c5d0 (9): Bad file descriptor 00:21:49.158 [2024-11-26 20:42:49.255122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:49.158 [2024-11-26 20:42:49.255146] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:49.158 [2024-11-26 20:42:49.255157] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:49.158 [2024-11-26 20:42:49.255167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
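Note: this attach attempt deliberately uses :spdk-test:key1, which does not match the PSK the target listener was set up with, so controller initialization fails and the RPC returns an error; the request/response dump that follows shows the JSON-RPC payload bperf_cmd sent over /var/tmp/bperf.sock. A rough sketch of issuing the same call directly over that UNIX socket (params abridged from this particular run, response handling simplified):

    import json
    import socket

    # JSON-RPC request copied (abridged) from the dump below; only "psk"
    # differs between the passing (key0) and failing (key1) attach attempts.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "nvme0", "trtype": "tcp", "traddr": "127.0.0.1",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "psk": ":spdk-test:key1",
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/bperf.sock")
        sock.sendall(json.dumps(request).encode())
        # Simplified: a robust client keeps reading until the JSON parses
        print(json.loads(sock.recv(65536)))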
00:21:49.158 request: 00:21:49.158 { 00:21:49.158 "name": "nvme0", 00:21:49.158 "trtype": "tcp", 00:21:49.158 "traddr": "127.0.0.1", 00:21:49.158 "adrfam": "ipv4", 00:21:49.158 "trsvcid": "4420", 00:21:49.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:49.158 "prchk_reftag": false, 00:21:49.158 "prchk_guard": false, 00:21:49.158 "hdgst": false, 00:21:49.158 "ddgst": false, 00:21:49.158 "psk": ":spdk-test:key1", 00:21:49.158 "allow_unrecognized_csi": false, 00:21:49.158 "method": "bdev_nvme_attach_controller", 00:21:49.158 "req_id": 1 00:21:49.158 } 00:21:49.158 Got JSON-RPC error response 00:21:49.158 response: 00:21:49.158 { 00:21:49.158 "code": -5, 00:21:49.158 "message": "Input/output error" 00:21:49.158 } 00:21:49.158 20:42:49 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:49.158 20:42:49 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.158 20:42:49 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.158 20:42:49 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.158 20:42:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:49.158 20:42:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:49.158 20:42:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:49.158 20:42:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:49.158 20:42:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:49.158 20:42:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@33 -- # sn=480027783 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 480027783 00:21:49.159 1 links removed 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@33 -- # sn=921188212 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 921188212 00:21:49.159 1 links removed 00:21:49.159 20:42:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85852 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85852 ']' 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85852 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85852 00:21:49.159 killing process with pid 85852 00:21:49.159 Received shutdown signal, test time was about 1.000000 seconds 00:21:49.159 00:21:49.159 Latency(us) 00:21:49.159 [2024-11-26T20:42:49.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.159 [2024-11-26T20:42:49.514Z] =================================================================================================================== 00:21:49.159 [2024-11-26T20:42:49.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.159 20:42:49 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85852' 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 85852 00:21:49.159 20:42:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 85852 00:21:49.417 20:42:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85834 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85834 ']' 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85834 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85834 00:21:49.417 killing process with pid 85834 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85834' 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 85834 00:21:49.417 20:42:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 85834 00:21:49.675 00:21:49.675 real 0m5.798s 00:21:49.675 user 0m11.089s 00:21:49.675 sys 0m1.555s 00:21:49.675 20:42:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.675 ************************************ 00:21:49.675 END TEST keyring_linux 00:21:49.675 ************************************ 00:21:49.675 20:42:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:49.675 20:42:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:49.675 20:42:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:49.675 20:42:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:49.675 20:42:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:49.675 20:42:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:49.675 20:42:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:49.675 20:42:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:49.675 20:42:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.675 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.675 20:42:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:49.675 20:42:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:49.675 20:42:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:49.675 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:21:51.579 INFO: APP EXITING 00:21:51.579 INFO: killing all VMs 
00:21:51.579 INFO: killing vhost app 00:21:51.579 INFO: EXIT DONE 00:21:52.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:52.147 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:52.147 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:53.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.083 Cleaning 00:21:53.083 Removing: /var/run/dpdk/spdk0/config 00:21:53.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:53.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:53.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:53.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:53.083 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:53.083 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:53.083 Removing: /var/run/dpdk/spdk1/config 00:21:53.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:53.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:53.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:53.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:53.083 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:53.083 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:53.083 Removing: /var/run/dpdk/spdk2/config 00:21:53.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:53.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:53.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:53.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:53.083 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:53.083 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:53.083 Removing: /var/run/dpdk/spdk3/config 00:21:53.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:53.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:53.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:53.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:53.083 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:53.083 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:53.083 Removing: /var/run/dpdk/spdk4/config 00:21:53.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:53.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:53.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:53.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:53.083 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:53.083 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:53.083 Removing: /dev/shm/nvmf_trace.0 00:21:53.083 Removing: /dev/shm/spdk_tgt_trace.pid56724 00:21:53.083 Removing: /var/run/dpdk/spdk0 00:21:53.083 Removing: /var/run/dpdk/spdk1 00:21:53.083 Removing: /var/run/dpdk/spdk2 00:21:53.083 Removing: /var/run/dpdk/spdk3 00:21:53.083 Removing: /var/run/dpdk/spdk4 00:21:53.083 Removing: /var/run/dpdk/spdk_pid56565 00:21:53.083 Removing: /var/run/dpdk/spdk_pid56724 00:21:53.083 Removing: /var/run/dpdk/spdk_pid56922 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57003 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57029 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57137 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57149 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57283 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57484 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57638 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57716 00:21:53.083 
Removing: /var/run/dpdk/spdk_pid57800 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57886 00:21:53.083 Removing: /var/run/dpdk/spdk_pid57971 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58004 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58039 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58109 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58201 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58651 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58695 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58746 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58755 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58822 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58838 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58905 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58921 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58966 00:21:53.083 Removing: /var/run/dpdk/spdk_pid58990 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59030 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59048 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59184 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59214 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59302 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59628 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59646 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59677 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59696 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59706 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59725 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59744 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59765 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59784 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59792 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59813 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59832 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59851 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59861 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59880 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59899 00:21:53.083 Removing: /var/run/dpdk/spdk_pid59920 00:21:53.342 Removing: /var/run/dpdk/spdk_pid59939 00:21:53.342 Removing: /var/run/dpdk/spdk_pid59949 00:21:53.342 Removing: /var/run/dpdk/spdk_pid59970 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60006 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60014 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60049 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60122 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60151 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60160 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60189 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60198 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60210 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60248 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60269 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60298 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60307 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60318 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60327 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60337 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60346 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60356 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60365 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60394 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60420 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60430 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60464 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60472 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60481 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60521 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60533 00:21:53.342 Removing: 
/var/run/dpdk/spdk_pid60559 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60567 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60580 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60582 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60595 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60597 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60610 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60612 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60694 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60747 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60865 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60901 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60941 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60961 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60983 00:21:53.342 Removing: /var/run/dpdk/spdk_pid60998 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61029 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61050 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61129 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61147 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61191 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61271 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61335 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61364 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61464 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61507 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61545 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61771 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61869 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61903 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61927 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61966 00:21:53.342 Removing: /var/run/dpdk/spdk_pid61994 00:21:53.342 Removing: /var/run/dpdk/spdk_pid62033 00:21:53.342 Removing: /var/run/dpdk/spdk_pid62059 00:21:53.342 Removing: /var/run/dpdk/spdk_pid62452 00:21:53.342 Removing: /var/run/dpdk/spdk_pid62492 00:21:53.342 Removing: /var/run/dpdk/spdk_pid62844 00:21:53.342 Removing: /var/run/dpdk/spdk_pid63310 00:21:53.342 Removing: /var/run/dpdk/spdk_pid63593 00:21:53.342 Removing: /var/run/dpdk/spdk_pid64455 00:21:53.342 Removing: /var/run/dpdk/spdk_pid65377 00:21:53.342 Removing: /var/run/dpdk/spdk_pid65494 00:21:53.342 Removing: /var/run/dpdk/spdk_pid65562 00:21:53.342 Removing: /var/run/dpdk/spdk_pid66971 00:21:53.342 Removing: /var/run/dpdk/spdk_pid67285 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71098 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71483 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71593 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71720 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71749 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71770 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71804 00:21:53.342 Removing: /var/run/dpdk/spdk_pid71909 00:21:53.342 Removing: /var/run/dpdk/spdk_pid72046 00:21:53.342 Removing: /var/run/dpdk/spdk_pid72188 00:21:53.342 Removing: /var/run/dpdk/spdk_pid72264 00:21:53.342 Removing: /var/run/dpdk/spdk_pid72464 00:21:53.342 Removing: /var/run/dpdk/spdk_pid72546 00:21:53.601 Removing: /var/run/dpdk/spdk_pid72641 00:21:53.601 Removing: /var/run/dpdk/spdk_pid72995 00:21:53.601 Removing: /var/run/dpdk/spdk_pid73409 00:21:53.601 Removing: /var/run/dpdk/spdk_pid73410 00:21:53.601 Removing: /var/run/dpdk/spdk_pid73411 00:21:53.601 Removing: /var/run/dpdk/spdk_pid73672 00:21:53.601 Removing: /var/run/dpdk/spdk_pid73944 00:21:53.601 Removing: /var/run/dpdk/spdk_pid74322 00:21:53.601 Removing: /var/run/dpdk/spdk_pid74324 00:21:53.601 Removing: /var/run/dpdk/spdk_pid74657 00:21:53.601 Removing: /var/run/dpdk/spdk_pid74677 
00:21:53.601 Removing: /var/run/dpdk/spdk_pid74691 00:21:53.601 Removing: /var/run/dpdk/spdk_pid74725 00:21:53.601 Removing: /var/run/dpdk/spdk_pid74735 00:21:53.601 Removing: /var/run/dpdk/spdk_pid75085 00:21:53.601 Removing: /var/run/dpdk/spdk_pid75132 00:21:53.601 Removing: /var/run/dpdk/spdk_pid75458 00:21:53.601 Removing: /var/run/dpdk/spdk_pid75648 00:21:53.601 Removing: /var/run/dpdk/spdk_pid76079 00:21:53.601 Removing: /var/run/dpdk/spdk_pid76634 00:21:53.601 Removing: /var/run/dpdk/spdk_pid77517 00:21:53.601 Removing: /var/run/dpdk/spdk_pid78155 00:21:53.601 Removing: /var/run/dpdk/spdk_pid78157 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80176 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80229 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80282 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80330 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80448 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80504 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80557 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80617 00:21:53.601 Removing: /var/run/dpdk/spdk_pid80996 00:21:53.601 Removing: /var/run/dpdk/spdk_pid82202 00:21:53.601 Removing: /var/run/dpdk/spdk_pid82341 00:21:53.601 Removing: /var/run/dpdk/spdk_pid82577 00:21:53.601 Removing: /var/run/dpdk/spdk_pid83186 00:21:53.601 Removing: /var/run/dpdk/spdk_pid83348 00:21:53.601 Removing: /var/run/dpdk/spdk_pid83509 00:21:53.601 Removing: /var/run/dpdk/spdk_pid83606 00:21:53.601 Removing: /var/run/dpdk/spdk_pid83764 00:21:53.601 Removing: /var/run/dpdk/spdk_pid83875 00:21:53.601 Removing: /var/run/dpdk/spdk_pid84572 00:21:53.601 Removing: /var/run/dpdk/spdk_pid84615 00:21:53.601 Removing: /var/run/dpdk/spdk_pid84649 00:21:53.601 Removing: /var/run/dpdk/spdk_pid84901 00:21:53.601 Removing: /var/run/dpdk/spdk_pid84937 00:21:53.601 Removing: /var/run/dpdk/spdk_pid84971 00:21:53.601 Removing: /var/run/dpdk/spdk_pid85441 00:21:53.601 Removing: /var/run/dpdk/spdk_pid85452 00:21:53.601 Removing: /var/run/dpdk/spdk_pid85708 00:21:53.601 Removing: /var/run/dpdk/spdk_pid85834 00:21:53.601 Removing: /var/run/dpdk/spdk_pid85852 00:21:53.601 Clean 00:21:53.601 20:42:53 -- common/autotest_common.sh@1453 -- # return 0 00:21:53.601 20:42:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:53.601 20:42:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.601 20:42:53 -- common/autotest_common.sh@10 -- # set +x 00:21:53.601 20:42:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:53.601 20:42:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.601 20:42:53 -- common/autotest_common.sh@10 -- # set +x 00:21:53.860 20:42:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:53.860 20:42:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:53.860 20:42:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:53.860 20:42:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:53.860 20:42:53 -- spdk/autotest.sh@398 -- # hostname 00:21:53.860 20:42:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:54.119 geninfo: WARNING: invalid characters removed from testname! 
00:22:20.675 20:43:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:20.675 20:43:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:22.053 20:43:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:25.339 20:43:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:27.242 20:43:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:29.775 20:43:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:32.306 20:43:32 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:32.306 20:43:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:32.306 20:43:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:32.306 20:43:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:32.306 20:43:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:32.306 20:43:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:32.306 + [[ -n 5198 ]] 00:22:32.306 + sudo kill 5198 00:22:32.315 [Pipeline] } 00:22:32.331 [Pipeline] // timeout 00:22:32.336 [Pipeline] } 00:22:32.349 [Pipeline] // stage 00:22:32.354 [Pipeline] } 00:22:32.367 [Pipeline] // catchError 00:22:32.377 [Pipeline] stage 00:22:32.379 [Pipeline] { (Stop VM) 00:22:32.391 [Pipeline] sh 00:22:32.668 + vagrant halt 00:22:35.201 ==> default: Halting domain... 
00:22:39.430 [Pipeline] sh 00:22:39.709 + vagrant destroy -f 00:22:42.994 ==> default: Removing domain... 00:22:43.006 [Pipeline] sh 00:22:43.287 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:43.296 [Pipeline] } 00:22:43.310 [Pipeline] // stage 00:22:43.316 [Pipeline] } 00:22:43.329 [Pipeline] // dir 00:22:43.334 [Pipeline] } 00:22:43.347 [Pipeline] // wrap 00:22:43.352 [Pipeline] } 00:22:43.364 [Pipeline] // catchError 00:22:43.374 [Pipeline] stage 00:22:43.377 [Pipeline] { (Epilogue) 00:22:43.389 [Pipeline] sh 00:22:43.669 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:48.954 [Pipeline] catchError 00:22:48.956 [Pipeline] { 00:22:48.968 [Pipeline] sh 00:22:49.249 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:49.249 Artifacts sizes are good 00:22:49.259 [Pipeline] } 00:22:49.272 [Pipeline] // catchError 00:22:49.284 [Pipeline] archiveArtifacts 00:22:49.291 Archiving artifacts 00:22:49.434 [Pipeline] cleanWs 00:22:49.445 [WS-CLEANUP] Deleting project workspace... 00:22:49.445 [WS-CLEANUP] Deferred wipeout is used... 00:22:49.450 [WS-CLEANUP] done 00:22:49.452 [Pipeline] } 00:22:49.466 [Pipeline] // stage 00:22:49.471 [Pipeline] } 00:22:49.484 [Pipeline] // node 00:22:49.489 [Pipeline] End of Pipeline 00:22:49.522 Finished: SUCCESS